Part IV: Review of Norbert Elliot and Les Perelman’s (Eds.) _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_

Jessica Nastal-Dema, Prairie State College

Elliot, N., & Perelman, L. (Eds.). (2012). Writing assessment in the 21st century: Essays in honor of Edward M. White. New York, NY: Hampton Press.


Note: This is the final installment in a series of reviews: see Section I; Section II; and Section III.


I imagine readers of Writing Assessment in the 21st Century: Essays in Honor of Edward M. White will each take away something different from their interactions with the text. It could be used as a primer on writing assessment for graduate students, experienced instructors, and WPAs who seek to learn more about the field. It can serve as an introduction to educational measurement for those of us more comfortable on the Rhetoric and Composition/Writing Studies side of things. It’s a collection of important research by some of the field’s most prominent scholars. It is a significant resource, one I have turned to several times since its publication.

As I began writing this final, delayed installment of my review, I went back to the words that first struck me when I opened Writing Assessment in the 21st Century upon its publication in 2012: “Where would we be without each other? The effort to design, develop, and complete this collection reveals the strength of community” (p. xi).

This book ushered me into writing assessment in a number of ways. As a PhD student, I tried to get my hands on anything I could by the authors whose writing spoke to me. I mined the bibliographies in many of the twenty-seven chapters to prepare for my qualifying exams and later, my dissertation. I felt encouraged to make connections between historical approaches to admissions and placement practices and what I was observing in twenty-first century urban settings with diverse student bodies. I was energized to continue operating under the assumption that assessment can interact with curricula and classroom practice in generative ways.

I realized that my work, regardless of how different it was from my peers’, fit into a body of scholarship. I realized that I fit into a community of scholars, a community that welcomed me.

Norbert Elliot and Les Perelman helped me translate that realization from the page to my life. Norbert willingly struck up a correspondence with me in 2012, which led to his serving as a reader on my dissertation committee, and now, to his becoming a trusted mentor and friend. Les welcomed me as a guest on the CCCC Assessment Committee, and has been generous in acknowledging my work. While I still know Edward M. White mainly through his corpus, many of us meet him at sessions at the CWPA and CCCC, observe him mentoring graduate students, and read his frequent contributions to the WPA-Listserv. Together, these leaders guide emerging scholars. They show us the incredible range of possible inquiries into writing assessment. They demonstrate the power of collaboration. They, and this collection, embody the importance—the strength—of community.  


Section IV, “Toward a valid future: The uses and misuses of writing assessment,” is the last in Norbert Elliot and Les Perelman’s edited collection, Writing Assessment in the 21st Century, and each of its chapters confronts the tension between outdated methods of writing assessment and the view held by instructors and WPAs of “writing as a complex cognitive act” (p. 410).


Writing Assessment in the 21st Century brings “together the worlds of writing teachers and of writing assessment” (p. 499) as it makes clear that the educational measurement and academic communities are not always at odds and have always had at least some shared concerns. The writers in this section continue to complicate those shared concerns in productive ways. This is the only section in which every chapter is written by a member of the Rhetoric and Composition/Writing Studies community; importantly, as Elliot and Perelman make clear in their introduction (and as Elliot’s On a Scale: A Social History of Writing Assessment in America, 2007, does), the issues these writers confront have persisted throughout the history of the teaching and testing of writing. In 1937, Carl Campbell Brigham, the creator of the SAT, believed testing specialists and teachers could work together, but “his vision of a new testing organization was one that favored the trained teacher over the educational measurement specialist” – an idea many teachers and assessment practitioners favor today (p. 408). And in 1974, Paul Diederich proposed using multiple samples of student writing “for valid and reliable writing assessment” (p. 408), a principle the academic side of assessment affirms each time we require students to submit a portfolio of their work.

Les Perelman leads the pack here with his powerfully written and convincing “Mass-Market Writing Assessments as Bullshit” (Chapter 24). Perelman’s chapter is incendiary. His argument that “[e]ducation should be the enemy of bullshit” seems neutral enough (p. 427). If, however, we view the educational landscape while considering the opening line of Harry G. Frankfurt’s On Bullshit, as Perelman invites us to do, we can see how controversial his position becomes: “One of the most salient features of our culture is that there is so much bullshit” (Frankfurt qtd. in Perelman, p. 426). Writing assessment has the potential to improve our teaching, our writing programs, and the student learning that takes place within them. Perelman claims, however, as White did in his infamous “My Five-Paragraph-Theme Theme,” that it is overrun by bullshit: in the reports mass-market testing organizations distribute to drum up support for their cheap and fast—and effective!—methods, in the writing those tests encourage from students who are “not penalize[d]…for presenting incorrect information,” and in the scoring sessions more concerned with standardization than with carefully reading student writing (p. 427). Ultimately, mass-market testing organizations are driven by “an obese bottom line on the balance sheet,” not by “having students display and use knowledge, modes of analysis, or both” (p. 435; p. 429). After reading this chapter, I imagine readers will also want to explore the NCTE Position Statement on Machine Scoring and the petition “Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment,” in addition to Farley (2009) and Lemann (1999).

The remaining chapters in this section offer suggestions for how WPAs might challenge this “bullshit effect.” As rhetoricians, we know that language can represent and reinforce social power structures, a point Cindy Moore (Chapter 26), Peggy O’Neill (Chapter 25), and Richard Haswell (Chapter 23) take up. Each chapter in this section highlights the absolute necessity of collaborating with our colleagues and of communicating with people outside our communities.

Haswell argues that WPAs are better positioned to design, report on, and control assessments that focus on students’ needs when they embrace, rather than fear or reject, numbers. While there are many reasons teacher-scholars have traditionally resisted numbers about writing, the most prominent is that quantitative data provide only a limited perspective on students’ writing abilities; for many, numbers can offer no more than an abstraction of the complexities of writing. Haswell argues, however, that the more WPAs use numbers and data within their programs, the better able they will be to “stave off outside assessment” (p. 414). Numbers can be powerfully convincing; as such, Haswell claims we should “fight numbers with numbers” and come prepared with quantitative data that support our concerns and values about writing (p. 414). I agree with Haswell, and with White, that the more we can do assessment, the more we can do with assessment.

Cindy Moore insightfully explains the precarious position of WPAs and writing faculty, and how using ambiguous, field-centric terms may, in fact, reduce our efficacy. While scholars like Patricia Lynne (2004) argue against using the term “validity” because of its association with the positivist tradition, Moore claims it is precisely because of this tradition that the term holds such weight in our cultural, interdisciplinary, and institutional conversations. If WPAs were to use a different term, like Lynne’s “meaningfulness,” we would lose credibility with the very people with whom we need to establish it.

O’Neill continues the work of Reframing Writing Assessment (2010), examining WAC/WID programs at two universities to demonstrate how the framing of writing assessment influences “how others understand writing and writing assessment as well as the role of composition and rhetoric in the academy” (p. 450). As such, it is crucial for WPAs and those of us in Rhetoric and Composition/Writing Studies to use writing assessment not to further the bullshit Perelman sees but, instead, to shape the conversations about what it means to teach writing.

Finally, Kathleen Blake Yancey continues the work of her foundational 1999 article, “Historicizing Writing Assessment,” as she discusses the current rhetorical situation of writing assessment (Chapter 27). While the third wave of writing assessment allowed for changes at the local level, as individual programs developed their own outcomes and assessments, Yancey sees the fourth wave as characterized by collaborative practices that transcend a specific context. Rather than responding only to local issues, these collaborative models allow participants to align some practices and invent others, which can be critical in this era of increased participation in assessment by the federal government and institutional bodies.

I find Yancey’s position intriguing. Locally controlled assessments are not a panacea; however, they certainly have much more face validity than mass-market exams, and they may offer us more opportunities to examine carefully how our practices affect our diverse bodies of students. I also see the benefit of frameworks like the WPA Outcomes Statement and the Standards for Educational and Psychological Testing (2014) in guiding and shaping the field and in serving as a foundation and touchpoint for the wide range of writing instructors. In my own work, for instance, the WPA Outcomes Statement has been of great use in discussing with different departments on campus what writing is, does, and can be. Perelman and Elliot explain, “[t]his new model, independent of any specific local need, is located within multiple, diverse communities” (p. 411). I believe it is from understanding these multiple, diverse communities that we can improve our writing assessments and classroom practices.

References

Adler-Kassner, L., & O’Neill, P. (2010). Reframing writing assessment to improve teaching and learning. Logan, UT: Utah State University Press.

AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association. 


Council of Writing Program Administrators (2014). WPA outcomes statement for First-Year Composition (Revisions adopted 17 July 2014). WPA: Writing Program Administration, 38, 142–146.

Elliot, N. (2007). On a scale: A social history of writing assessment in America. New York: Peter Lang.


Farley, T. (2009). Making the grade: My misadventures in the standardized testing industry. Sausalito, CA: PoliPoint Press.

Frankfurt, H. G. (2004). On bullshit. Princeton, NJ: Princeton University Press.


Lemann, N. (1999). The big test: The secret history of the American meritocracy. New York, NY: Farrar, Straus and Giroux.

Lynne, P. (2004). Coming to terms: A theory of writing assessment. Logan, UT: Utah State University Press.

O’Neill, P., Moore, C., & Huot, B. (2009). A guide to college writing assessment. Logan, UT: Utah State University Press.

White, E. M. (2008). My five-paragraph-theme theme. College Composition and Communication, 59(3), 524-525.

Yancey, K. B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50(3), 483-503.




