Brian Huot

(Re)Articulating Writing Assessment for Teaching and Learning

…holistic scoring were developed by testing companies based upon theoretical and epistemological positions that do not reflect current knowledge of literacy and its teaching. (8)

Students and teachers have seldom recognized or been able to harness its potential to improve teaching and learning. In fact, assessment has often been seen as a negative, disruptive feature for the teaching of writing. (9)

However, no English or composition scholars played a major role in the development of holistic scoring. It was a procedure devised to ensure reliable scoring among independent readers, since reliability, as a “necessary but insufficient condition for validity” (Cherry and Meyer 1993, 110), is a cornerstone of traditional measurement, which spawned multiple-choice tests and the testing culture and mentality that have become such an important part of current ideas about education. (24)

The benefits of holistic scoring for teachers go beyond an attitude change toward assessment and provide them with models for assignment construction, fair grading practices, and the articulation of clear course goals. (32)

The contradictory impulses in White’s essay reflect what I take to be a love/hate relationship, evident not only in White’s notion of reliability but also in the way that college English views writing assessment and the researchers who developed methods like holistic scoring. (33)

Like White, Yancey sees the history of writing assessment as a struggle between teachers and testers: “the last fifty years of writing assessment can be narrativized as the teacher-layperson (often successfully) challenging the (psychometric) expert” (484). (34)

Camp explains that research eventually indicated that, although student scores on multiple-choice tests and essay exams would be similar, these “formats,” as Camp calls them, were ultimately measuring different “skills.” (40)

The question for validity is no longer just whether or not a test measures what it purports to measure, but rather “whether our assessments adequately represent writing as we understand it” (61). (41)

Information about the decisions to be made and the actions to be taken needs to be supplied for each use of the assessment, not only negating any simple declaration of validity for a specific type of assessment but also introducing the necessity of supplying empirical and theoretical evidence of validity for specific environments, populations, and curricula. (50)

…validity is a way that “the inquiry lens is turned back on researchers and program developers themselves as stakeholders, encouraging critical reflection about their own theories and practices” (Moss 1998, 119). (51)

In other words, a multiple-choice test becomes a viable way of assessing writing because it is technologically possible, satisfying the technical need for reliability, even though it may not contain any writing. Writing assessment has been predominantly constructed as a technical problem requiring a technological solution. (144)

In fact, many teachers see assessment as a negative force because so many current assessment practices do not even attempt to address teaching and learning, yet they nonetheless narrow or guide instruction, since teachers and programs are linked to student performance on assessment measures. (150)
