Part III: Review of Norbert Elliot’s and Les Perelman’s (Eds.) _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_

Elliot, N., & Perelman, L. (Eds.). (2012). _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_. New York, NY: Hampton Press.

By Jessica Nastal, University of Wisconsin-Milwaukee

This is the third review in a series of five about _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_, edited by Norbert Elliot and Les Perelman. The collection is a “testament to White’s ability to work across disciplinary boundaries,” as it includes contributions from both the writing studies community (including the National Writing Project, writing centers, classroom instruction, and writing programs) and the educational measurement community (p. 2). It is also a snapshot – or a series of snapshots, since it runs over 500 pages – of contemporary interests in and concerns about writing assessment; an update to _Writing Assessment: Politics, Policies, Practices_ (1996), edited by White, William Lutz, and Sandra Kamusikiri.

Each chapter in Part III, “Consequence in Contemporary Writing Assessment: Impact as Arbiter,” drives toward the last sentence of the section’s final chapter, written by Liz Hamp-Lyons: “You cannot build a sturdy house with only one brick” (p. 395). Elliot and Perelman highlight the section’s dedication to the question of agency – in Edward M. White’s words, the “rediscovery of the functioning human being behind the text” (qtd. p. 371). I also see the authors in Part III demonstrating their dedication to understanding the variety of methods, interpretations, and social consequences of writing assessment.

Elbow pauses in his “Good Enough Evaluation” and writes, “I seem to be on the brink of saying what any good postmodern theorist would say: there is no such thing as fairness; let’s stop pretending we can have it or even try for it” (p. 305). He doesn’t cross that brink, of course, and the writers in this section discuss how writing assessment in the twenty-first century might strive to build sturdy houses with many bricks of various shapes and sizes.

In Chapter 17, Peter Elbow urges teachers and administrators of writing to consider “good enough evaluation,” not as a way to get us off the hook of careful evaluation, but as a way to rediscover the human being both writing and reading the text. In the spirit of White’s practical and realistic forty-year approach, Elbow reminds us that the “value of writing is necessarily value for readers” – and yes, this even means teachers of writing (p. 310). He concludes by explaining that such evaluation could result in evaluation sessions with “no pretense at ‘training’ or ‘calibrating’ [readers] to make them ignore their own values” (p. 321).

Elliot and Perelman have set up another interesting contrast in Part III: while many readers will agree with Elbow (how can we not?), we might have some questions about how this good enough evaluation works in practice – questions Doug Baldwin (Chapter 18) helps to highlight. How is it that the results become “more trustworthy” through this process (p. 319)? What makes Directed Self-Placement the “most elegant and easy” alternative to placement testing (p. 317; Royer and Gilles discuss the public and private implications of DSP in Chapter 20)? What impact would multidimensional grading grids, instead of GPAs, have on reading student transcripts (pp. 316-317)? Baldwin helps us ask how we can ensure the “technical quality” of Elbow’s ideal – though non-standardized – evaluations (p. 327).

For Baldwin, fairness – a concept to which the authors of this section are dedicated – “refers to assessment procedures that measure the same thing for all test-takers regardless of their membership in an identified subgroup” (p. 328). He uses the chapter to expose practices that might display “face fairness” – allowing students to choose their prompt, use a computer, or use a dictionary – but that might, on closer inspection, reveal deeper unfairness for students. Baldwin’s conclusion provides guidance for those of us concerned about the state of writing and writing assessment in the twenty-first century, our diverse populations of students, and our “concerns about superimposing one culture’s definition of ‘good writing’ onto another culture” (p. 336).

Asao B. Inoue and Mya Poe (Chapter 19), Gita DasBender (Chapter 21), and Liz Hamp-Lyons (Chapter 22) continue probing questions of agency, fairness, and local contexts. The “generation 1.5” students DasBender worked with were confident in their literacy skills, identified as highly motivated, and expressed satisfaction with their writing courses. On the surface, it seemed the mainstream writing courses served them well; however, instructors believed students “struggled to succeed” in them (p. 376). DasBender observed that “generation 1.5 students’ self-perceptions as reflected in their DSP literacy profile…is at odds with” the abilities they demonstrate in mainstream writing courses (p. 383).

This conflict seems representative of broader concerns about contemporary writing assessment in action. What are programs to do when they employ theoretically sound, fair policies designed to foster student participation and responsibility (“asking them where they fit,” in Royer and Gilles’ words) but that seem to fail in the eyes of instructors or administrators? DasBender, Elbow, Baldwin, Inoue, Poe, Royer, Gilles, and Hamp-Lyons remind us that while _Writing Assessment in the 21st Century_ does much to situate writing assessment and Ed White’s role within it, we have more work to do on behalf of all our students – work that Part IV, “Toward a Valid Future,” alludes to.

National Council of Teachers of English Position Statement on Machine Scoring

NCTE has just released a position statement on the use of automated essay scoring (AES) in writing assessment. The statement explains why AES shouldn’t be used to evaluate student writing, offers some alternatives, and includes an annotated bibliography of research on machine scoring of student writing. The bibliography is based on the JWA bibliography compiled by Haswell, Donnelly, Hester, O’Neill, and Schendel, published in 2012.

What do you think of NCTE’s statement on machine scoring? How can it be useful? Does it go far enough? Is it solidly grounded in research? Let us know what you think.