Part III: Review of Norbert Elliot’s and Les Perelman’s (Eds.) _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_

Elliot, N., & Perelman, L. (Eds.). (2012). Writing Assessment in the 21st Century: Essays in Honor of Edward M. White. New York, NY: Hampton Press.

By Jessica Nastal, University of Wisconsin-Milwaukee
This is the third review in a series of five about Writing Assessment in the 21st Century: Essays in Honor of Edward M. White, edited by Norbert Elliot and Les Perelman. The collection is a “testament to White’s ability to work across disciplinary boundaries,” as it includes contributions from the writing studies community (including the National Writing Project, writing centers, classroom instruction, and writing programs) and the educational measurement community (p. 2). It is also a snapshot – or a series of snapshots, since it runs over 500 pages – of contemporary interests in and concerns about writing assessment, and an update on Writing Assessment: Politics, Policies, Practices (1996), edited by White, William Lutz, and Sandra Kamusikiri.

Each chapter in Part III, “Consequence in Contemporary Writing Assessment: Impact as Arbiter,” drives toward the last sentence of the last chapter in the section, written by Liz Hamp-Lyons: “You cannot build a sturdy house with only one brick” (p. 395). Elliot and Perelman highlight the section’s dedication to the question of agency, in Edward M. White’s words, the “rediscovery of the functioning human being behind the text” (qtd. p. 371). I also see the authors in Part III as demonstrating their dedication to understanding the variety of methods, interpretations, and social consequences of writing assessment.

Elbow pauses in his “Good Enough Evaluation” and writes, “I seem to be on the brink of saying what any good postmodern theorist would say: there is no such thing as fairness; let’s stop pretending we can have it or even try for it” (p. 305).  He doesn’t cross that brink, of course, and the writers in this section discuss how writing assessment in the twenty-first century might strive for building sturdy houses with many bricks of various shapes and sizes.

In Chapter 17, Peter Elbow urges teachers and administrators of writing to consider “good enough evaluation,” not as a way to get us off the hook of careful evaluation, but as a way to rediscover the human being both writing and reading the text.  In the spirit of White’s practical and realistic forty-year approach, Elbow reminds us that the “value of writing is necessarily value for readers”; and yes, this even means teachers of writing (p. 310). He concludes by explaining that using such evaluation could result in evaluation sessions with “no pretense at ‘training’ or ‘calibrating’ [readers] to make them ignore their own values” (p. 321).

Elliot and Perelman have set up another interesting contrast in Part III: While many readers will agree with Elbow (how can we not?), we might have some questions about how this good enough evaluation works in practice, questions Doug Baldwin helps to highlight. How is it that the results become “more trustworthy” through this process (p. 319)? What makes Directed Self-Placement the “most elegant and easy” alternative to placement testing (p. 317; Royer and Gilles discuss the public and private implications of DSP in Chapter 20)? What impact would multidimensional grading grids, instead of GPAs, have on reading student transcripts (pp. 316-317)? Baldwin helps to ask how we can ensure the “technical quality” of Elbow’s ideal – though non-standardized – evaluations (p. 327).

For Baldwin, fairness, a concept to which the authors of this section are dedicated, “refers to assessment procedures that measure the same thing for all test-takers regardless of their membership in an identified subgroup” (p. 328). He uses the chapter to expose practices that might display “face fairness” – allowing students to choose their prompt, use a computer, or use a dictionary – but that might reveal deeper unfairness for students. Baldwin’s conclusion provides guidance for those of us concerned about the state of writing and writing assessment in the twenty-first century, our diverse populations of students, and our “concerns about superimposing one culture’s definition of ‘good writing’ onto another culture” (p. 336).

Asao B. Inoue and Mya Poe (Chapter 19), Gita DasBender (Chapter 21), and Liz Hamp-Lyons (Chapter 22) continue probing questions of agency, fairness, and local contexts.  The “generation 1.5” students DasBender worked with were confident in their literacy skills, identified as being highly motivated, and expressed satisfaction with their writing courses.  On the surface, it seemed like the mainstream writing courses served them well; however, instructors believed students “struggled to succeed” in them (p. 376).  DasBender observed, “generation 1.5 students’ self-perceptions as reflected in their DSP literacy profile…is at odds with” the abilities they demonstrate in mainstream writing courses (p. 383).

This conflict seems representative of some of the concerns about contemporary writing assessment in action.  What are programs to do when they employ theoretically sound, fair policies designed to enable student participation and responsibility (“asking them where they fit,” in Royer and Gilles’ words) but that seem to fail in the eyes of instructors or administrators?  DasBender, Elbow, Baldwin, Inoue, Poe, Royer, Gilles, and Hamp-Lyons remind us that while Writing Assessment in the 21st Century does much to situate writing assessment and Ed White’s role within it, we have more work to do on behalf of all our students – which Part IV:  “Toward a Valid Future” alludes to.

National Council of Teachers of English Position Statement on Machine Scoring


NCTE has just released a statement about the use of automated essay scoring (AES) in writing assessment. The statement explains why AES shouldn’t be used to evaluate student writing, offers some alternatives, and includes an annotated bibliography of research on machine scoring of student writing. The bibliography is based on the JWA bibliography compiled by Haswell, Donnelly, Hester, O’Neill, and Schendel, published in 2012.

What do you think of NCTE’s statement on machine scoring? How can it be useful? Does it go far enough? Is it solidly grounded in research? Let us know what you think.

Part II: Review of Norbert Elliot’s and Les Perelman’s (Eds.) _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_.

Elliot, N., & Perelman, L. (Eds.). (2012). Writing Assessment in the 21st Century: Essays in Honor of Edward M. White. New York, NY: Hampton Press.

By Jessica Nastal, University of Wisconsin-Milwaukee

Norbert Elliot and Les Perelman’s subtitle for their introduction to Part II of Writing Assessment in the 21st Century: Essays in Honor of Edward M. White is “Bridging the Two Cultures,” specifically focusing on the cultures of the academic writing assessment community and the corporate educational community. As Ed White’s career illustrates, groundbreaking work can be done when we do bridge the seemingly vast divide. This section, however, makes it abundantly clear that the bridge assessment leaders advocate for will be difficult to build — especially in the midst of the “corporatization” of universities and writing assessment.[1]

The editors highlight how this section tackles “what may realistically be expected as we balance standards for good assessment practices…with the limits of available resources” as each author offers “practical alternatives, informed research design, and an advanced understanding of the construct of writing and of what is required to improve instructional practice” (p. 150). Central to Part II is the recognition that writing instruction and assessment are inextricably linked, a connection that is also central to White’s body of work. Reflecting on attitudes toward writing assessment in California in the 1970s, Irving Peckham remarks, “I remember the argument at the time as being cost and efficiency against validity and consequence” (p. 170). The authors of Part II echo Peckham’s sentiment as they explore academic and corporate discussions about assessment to provide assessment alternatives that address cost-effectiveness, rather than simply cost-efficiency, to improve instruction (see William Condon, Chapter 13).

ETS researcher Jill Burstein’s chapter, “Fostering Best Practices in Writing Assessment and Instruction with E-rater,” stands in contrast. Burstein appears to employ Lee Odell’s idea of the “given-new” exchange (see Chapter 16) by appealing to scholar-teachers’ dedication to the assessment loop and to generative reflection on our practices. Unfortunately, it reads like a chapter from ETS’s powerful marketing department, trying to sell readers on how E-rater and Criterion are “useful, helpful, and dependable,” in White’s words (this claim and these words appear around 20 times in the 12-page chapter). Burstein claims the automated essay scoring technology “has the potential to enhance and support the writing experience of this large, and culturally and linguistically diverse population” (p. 205). She concludes the chapter by explaining how the programs can support teachers and instruction, not replace them, but the chapter leaves me questioning whether a cultural bridge can be built between the corporate educational community and the academic writing assessment community.

But as every other chapter makes clear, writing assessment — and White’s work — is built on a dedication to a “better fit with our theories and values of language, reading, writing, research, and pluralist democracy” (Broad, p. 261), not on reducing writing to nine characteristics of grammar, usage, and mechanics. I heed White’s call to work beyond our disciplinary boundaries, but I am wary. I wonder about the reach of ETS and its fellow organizations as their “cost-effective” practices marginalize multilingual and low-income students (see Anne Herrington & Charles Moran, Chapter 12). I am concerned about the increasing push for computer-based writing assessment and the way it depersonalizes the acts of writing, interpretation, and communication. Finally, I don’t believe automated essay scoring provides “meaningful and consistent feedback” (p. 204), at least in its current state. Considering writing and assessment in such a limited way undermines, in Bob Broad’s language, “what we really value” about writing assessment.

As Broad and Diane Kelly-Riley (Chapter 8) demonstrate — and as Elliot and Perelman begin the collection — the academic writing assessment community is driven by our collaborations on local, regional, and national levels. That drive is what motivates dynamic criteria mapping; what helped Washington State University develop a model cross-campus portfolio assessment system and Colorado School of Mines engage in critical reflection about its interdisciplinary first-year courses; and what encourages us to consider the roles of placement, curricula, response, online instruction, and the constantly evolving world of writing. The conversations we have as a community, on the WPA-L and in departmental meetings about student writing, are essential to improving teaching and learning in theoretically and ethically sound ways.


Writing Assessment and Race Studies Sub Specie Aeternitatis: A Response to _Race and Writing Assessment_

By Rich Haswell, Haas Professor of English Emeritus, Texas A&M University, Corpus Christi

If understanding is impossible, knowing is imperative.

—Primo Levi

I was filling up my car when I noticed my traveling companion staring at a man in the next row of pumps, staring with fixed malevolence. The man was a stranger. He was also black. With alarm I realized my friend was executing the Southern hate stare. I had never witnessed a hate stare before, only having read about it in John Howard Griffin’s book Black Like Me. I was dumbfounded. We were in the middle of Wyoming, and my companion, a generation older than me, had lived nowhere east of Great Basin country. I had never seen him betray a shred of racism before. The stranger studiously disregarded the stare, finished filling up his car, and drove away. We drove away. Appropriately enough, the station we left was that cultural mix of neon, bad food, fuel, tourists, and truck drivers called Little America.

This incident happened thirty-seven years ago. It was the kind of racism—hateful, publicly hostile, of a kind with sundown laws and burning crosses—that we all pray a larger America has left behind. Racism itself, of course, we have not left behind. It has just diversified. A “new racism” is still present in a legion of forms: structural racism, institutionalized racism, benevolent racism, color-blind racism, scientific racism, culturalist racism, internalized racism, ethnopluralist racism, whitely racism, everyday racism, microaggressive racism—racisms more or less covert, more or less hateful. These terms are the fruit of the new race studies, which have emerged hand in hand with the new racism. Race studies study race in order to understand it better, in order to find ways to deal with people of all races with sympathy and equity, in order, eventually, they hope, to eliminate racial inequalities everywhere. Equality, of course, may not exactly fit some of the ways that the study of race applies its findings—think, for instance, of affirmative action. So recently Florida’s state board of education set grade-level test goals in reading at 90% for Asian students, 88% for whites, 81% for Hispanics, and 74% for blacks. The members of the Florida board of education do not hate Asian students in setting their particular bar so high but rather hope that whites, Hispanics, and blacks, given time, will catch up.

In doing so, of course, the board affirmed the existence of race. Herein lies a major contradiction in the new race studies. Often they start with critique (“critical race theory”) that argues “race” is a figment, a social, political, and ideological construction, and then they end by affirming and even celebrating racial categories and groups. A common line of argument first deconstructs “race,” then attacks “racism,” and then confirms the equality of all races.

The contradiction runs through Race and Writing Assessment, edited by Asao B. Inoue and Mya Poe (Peter Lang, 2012). At the top of the book, Inoue and Poe state the current truism that “race” is “artificial,” not biological but rather a “social and political construction” (4). And indeed the book tends not to use the terms “racism” and “racist” at all, preferring expressions such as “racialism,” “racialization,” “raciology,” and “racial formation” (exceptions are chapters by Valerie Balester, Nicholas Behm & Keith D. Miller, and Rachel Lewis Ketai). Yet over and over, as the book furthers projects and programs in writing assessment, it treats race as an objective reality, just as Florida’s board of education has done. In fact an early chapter in the book asks that registrars of USA universities record students’ race (Diane Kelly-Riley), and the last chapter wishes that France’s laws forbidding categorization by racial groups be held in abeyance for academic scholars (Élizabeth Bautier & Christiane Donahue). Despite Inoue and Poe’s start, nowhere afterward does the book deconstruct race. Puzzled, I ask a transgressive question. Does Race and Writing Assessment, in its commendable efforts to act with more fairness toward students of every color, help maintain racism? Let me state my question as a logical aporia. People cannot go about eliminating racism without constructing the notion of race, and the construction of race can only further racism.

This book made me realize again that today, for all of us, “race” is aporetic. It is so by its unavoidable nature as interim, by its being for the time being. One day—a future against which currently many groups fervently fight—interracial marriage globally will eliminate “race.” Eliminated will be the unavoidable first thought on meeting a stranger: “white,” “non-white,” “Asian,” “Amerindian,” “Pacific Islander.” Eliminated will be those euphemistic replacements for “race” that attract scholars in race studies: “population,” “people,” “ethnicity,” “color,” “diversity,” “differentiation,” “nonmainstream,” “minority.” Sub specie aeternitatis, so to speak, the subspecies, or more accurately, the varieties of Homo sapiens today called human “races,” will be no more. Color and pallor will blend into one. But until that phenotypic dispersal, we live racial aporias.

So to the point. Until then, any writing assessment shaped by anti-racism will still be racism or, if that term affronts, will be stuck in racial contradictions. Here are four racial aporias, subsets of the basic aporia expressed above, currently embedded in writing assessment and illustrated by Race and Writing Assessment, although not explicitly expressed there.

To correct racially aligned outcomes, writing assessment must apply benchmarks that are racially aligned. As a whole this book supports the notion that to “democratize” our classrooms (Nicole M. Merola, p. 165), no racial group there should be “overrepresented” or “underrepresented” (Inoue, p. 84). Stratification by race, however, is not as straightforward as it appears. In 2007 at Juilliard, with student placement by SAT scores and educational background, enrollment in Fundamentals, the basic writing course, was 50% Asian and 14% white. In 2010, after a change to an impromptu essay for placement, enrollment in Fundamentals was 41% Asian and 52% white (Anthony Lioi, p. 160). Lioi judges the change “more valid” because now the course population is “more representative of the general racial composition of the student body” (p. 161). This kind of benchmarking Lioi’s co-author Nicole M. Merola calls “isomorphism with the demographics” (p. 164). I won’t examine the unexamined assumption underlying this approach to validating writing-assessment practices, that the percentage of problem writers is the same in every racial group. But why the particular demographic chosen? Why not the racial makeup of students applying, or of students graduating, or of the USA population as a whole? More to the point, note that this racial isomorphism undercuts other kinds of isomorphism. Why isn’t Juilliard’s basic-writing placement system validated by isomorphic representation of social class, or nationality, or chosen academic major? There is some tacit discrimination going on. Why the effort to represent race evenly and not gender, for instance? Study after study has shown adolescent males performing worse than females on tests of verbal ability, and across the nation males are overrepresented in basic writing courses, yet I know of no professional interest in altering tests or writing placement systems to correct this misrepresentation. Naturally this book aligns itself with race. But as the poet Ai said, “The insistence that one must align oneself with this or that race is basically racist” (1980, p. 277).

Writing-assessment categorization by race erases the individual, yet it is only the individual that can erase race. As Ai continues, “And the notion that without a racial identity a person can’t have any identity perpetuates racism” (p. 277). My emphasis is on a person. Lioi and Merola’s chapter does not sustain a look at any individual student or any individual piece of writing. Nor do the majority of the other chapters (the two exceptions are studies of particular student essays by Anne Herrington & Sarah Stanley, and Zandra L. Jordan). What gets lost when the individual gets lost in discussions of race? For one thing, the notion of race as socially constructed. When “Latino” becomes an assessment category, effaced are persons who have been put in this category but whose individual qualities might well dispute the categorization—e.g., mother of Iberian heritage, father of Amerindian and African heritage. Not only do authors use “Latino” or “Hispanic” as a race category without raising the issue of the legitimacy of that categorization, I don’t remember one author in this collection even raising the issue of “mixed race.” It seems that when “racial formation” becomes the focus, race gets affirmed and the possible elimination of race—phenotypic dispersal—gets tabled. Ai is 1/2 Japanese, 1/8 Choctaw, 1/4 African American, and 1/16 Irish. She is the future, but she doesn’t fit in this book. In Florida’s new reading standards, which set a passing rate of 74% for blacks and 90% for Asian students, what would they do with Ai, or with any student who also happens to be part African and part Asian? The issue marks one spot where race studies in writing assessment need badly to catch up. Women’s studies have been exploring ways out of the trap of essentialism for three decades now.

Writing research into racial formations disregards individual agency, but individual agency fuels racial formations. Sartre recounts the story of a woman who hated Jews because a Jewish furrier had ruined one of her furs. Sartre famously asks why she chose to hate Jews and not furriers. What is usually not cited is Sartre’s next sentence: “Why Jews or furriers rather than such and such a Jew or such and such a furrier?” His answer is that she has a “passion,” a “predisposition toward anti-Semitism” (1943, pp. 11-12). In a word Sartre is describing prejudice, the individual, psychological dynamic that the old race studies have investigated for a century now and that still helps drive racial discrimination. Race and Writing Assessment studiously ignores this dynamic. Inoue and Poe make quite clear why. “Racial formation,” they say, is “not about prejudice, personal biases, or intent” but about “forces in history, local society, schools, and projects—such as writing programs” (p. 6). The new racism is perpetrated not by individuals but by institutions such as language, curricula, or assessment systems. Inoue and Poe’s contributors agree. They investigate how race is written into machine-scoring programs (Herrington & Stanley), how writing rubrics are mono-cultural (Balester), how a grading contract system favors certain races (Inoue), how African American English is viewed at historically black colleges (Jordan) and at traditionally Anglo graduate schools (Behm & Miller), how directed self-placement systems (Ketai) and standardized placement testing (Lioi & Merola, and Kathleen Blake Yancey) may further racism, and how topics in French school assessment disadvantage immigrant students (Bautier & Donahue). This is all good, in part because it provides a new understanding of racism as embedded and hidden in verbal and social structures.

Yet the erasure of classic hostile individual prejudice—and I can’t think of one specific example in this book—can’t be all good, either. Surely acts of individual racial prejudice have not entirely disappeared from the college writing-assessment scene. Kelly-Riley remarks that in essay-evaluation sessions, “Faculty raters or the other members of the rating community may unwittingly introduce silently held, negative beliefs” (p. 33). I wouldn’t put it so gently. It wasn’t that long ago that Jan and I recorded a composition teacher saying about the student author of an anonymous essay, “he might be a black student and is not probably used to looking at abstractions” (Haswell and Haswell, 1995, p. 245). More to the point, the department-sanctioned rubric, or the grading contract, or the directed self-placement instructions, or the computerized diagnostic program was written by individuals and is applied, essay by essay, by individual teachers and students. Should we let off the hook the individual agency that is still necessary for institutional racism to operate?

Writing scholars position themselves outside institutional racism to understand it but their understanding concludes that there is no outside. By virtue of their scholarly perspective, can writing scholars also be let off the hook? Nowhere in Race and Writing Assessment does any contributor note the possible contradiction between their opposition to institutionalized racism and their belief that institutionalized racism is everywhere—a contradiction, by the way, that has been fully explored in sociological race studies (see Robert Miles, 1989 and Howard Winant, 2005). Behm and Miller talk of the “ubiquity of racism and the hidden power relations that perpetuate it” (p. 135). Ketai assures us that “Race is woven throughout the fabric of placement testing and through conceptions of literacy and educational identity” (p. 145). None of them voice the possibility that this pervasiveness of racial formations might include their own relations, conceptions, and identities. Behm and Miller propose that students can extract themselves through critique (“critical race narratives”) and Ketai offers contextualization as a way to rescue directed self-placement. But they don’t entertain the high probability that critique and context will remain racialized. Nor do the editors note that their book, which repeatedly castigates the stylistic criterion of high academic English as a racial formation, is entirely written in high academic English. And far beyond the margins of Race and Writing Assessment lies the troubling contradiction, often expressed in the literature of racism (“The horror, the horror”), that immersion in the disease of racial inequality risks contamination.

Logically there is no escape from an aporia. As in the hermeneutic circle, where knowledge of the parts is defined by the whole and knowledge of the whole is determined by the parts, in scholarly circles knowledge of institutionalized racism is held by members of the racialized institution. Racial aporias will end only when race itself ends. Primo Levi’s readers sometimes asked him if he understood the level of racial hatred that created the Holocaust. Still remembered is his astonishing response. “No, I don’t understand it nor should you understand it,” he wrote, “it’s a sacred duty not to understand” (1965, p. 227). For to understand is to subsume.

Less remembered, however, is Levi’s continuation a page later: “We cannot understand it, but we can and must understand from where it springs, and must be on our guard. If understanding is impossible, knowing is imperative, because what happened could happen again” (p. 228). By “knowing” Levi meant “modest and less exciting truths, those one acquires painfully, little by little and without shortcuts, with study, discussion, and reasoning, those that can be verified and demonstrated” (p. 229). Levi is asking his readers not to forget that understanding may happen for once and forever but knowledge comes at different times and at different stages for different purposes.

This is why, despite what my own readers may be thinking, I applaud Race and Writing Assessment. Until race disappears, racism and racial formations will be with us, but in the interim, “little by little,” they can be exposed and ameliorated. In this book essay after essay ferrets out unfair writing-assessment practices. That takes courage, especially since some of the practices—such as writing rubrics, grading contracts, computerized evaluation, and directed self-placement—wield a hefty amount of professional esteem. And essay after essay shows more equitable outcomes when particular assessment practices are changed and applied. Scholarly study of racial effects has made the placement systems demonstrably less unfair at Washington State University (Kelly-Riley), the Juilliard School (Lioi), the Rhode Island School of Design (Merola), and Oregon State University (Yancey)—and study of teacher bias at the University of California, Merced and Fayetteville State University in North Carolina surely will improve teacher practice at those places in the future (Judy Fower & Robert Ochsner). This scholarship may involve Levi’s “modest and less exciting truths,” and as I argue it may not have entirely extricated itself from race, but for certain individual students it has made writing assessment more just. During the millennia that it will take for race to disappear, growth in racial justice is not impossible.

Let’s face it, though. In writing assessment, that growth will happen by short, incremental steps. As Chris Anson’s chapter makes abundantly clear, past compositionists have had an inclination to pretend their operations are free of race, constructed or not. As for future writing-assessment studies, the scholarly stare is hardly comparable to the Southern hate stare, but scholars must, as Levi cautions, constantly be on their guard. Kelly-Riley puts the situation honestly and exactly: “if classrooms are microcosms of our larger society—complete with problems of injustice and inequity—then it is not reasonable to think that all students or teachers or disciplines can be safeguarded against intentional or unintentional bias” (p. 32). But some can, as this book shows. This is the reason to congratulate the editors and contributors of Race and Writing Assessment.


Ai. (1980). Ai [Florence Anthony]. In F. C. Locher (Ed.), Contemporary authors. Detroit, MI: Gale Research.

Haswell, J., & Haswell, R. H. (1995). Gendership and the miswriting of students. College Composition and Communication, 46(2), 223-254.

Levi, P. (1965). The reawakening. Boston, MA: Little, Brown.

Miles, R. (1989). Racism. London: Routledge & Kegan Paul.

Sartre, J.-P. (1948). Anti-Semite and Jew: An exploration of the etiology of hate. New York, NY: Schocken Books. (Original work published 1943)

Winant, H. (2005). Race and racism: Overview. In M. C. Horowitz (Ed.), New dictionary of the history of ideas (Vol. 5). Detroit, MI: Charles Scribner’s Sons.

CFP: Responses to Common Core State Standards, Smarter Balanced Assessment Consortium and Partnership for Assessment of Readiness for College and Career

The Journal of Writing Assessment is interested in scholars’ responses to the writing assessments connected with the Common Core State Standards that are in development. The two main consortia, the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Career (PARCC), have released various types of information about the assessments, including approach, use of technology, and sample items. While it is too early for full-fledged research on the specific writing assessments, theoretical discussions and critical reviews of material released from SBAC and PARCC are welcome.

The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and non-educational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.

For more information and submission guidelines, visit JWA online.

Review of Bob Broad et al.’s Organic Writing Assessment: Dynamic Criteria Mapping in Action

By Donna Evans, Eastern Oregon University

Broad, B., Adler-Kassner, L., Alford, B., Detweiler, J., Estrem, H., Harrington, S.,…Weeden, S. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan, UT: Utah State University Press. 174 pages.

In this text, co-authors from five different institutions have answered Broad’s call “to move beyond traditional assessment practices that over-simplify learning, teaching, and assessment, and to ‘embrace the life of things’” (p. 5). Relying primarily on Dynamic Criteria Mapping (DCM) methodology, first described by Broad in What We Really Value (2003), each project is designed to be rhetorically responsive to a unique institutional audience and investigational purpose. As a result, the processes, products, and analyses they report support the premise that what writing assessment experts increasingly value—locally grown, organic assessment—can be brought to fruition and yield bumper harvests of usable data.

Interestingly, the authors drafted their text in a dynamic form about a dynamic process. Broad supplies the first and last chapters, interchapters appear between most chapters, co-authors embed paragraph-long comments within the text of Chapters 2 through 6, and Broad is referenced throughout, creating actant (a force for change) traces of network structure. With disparate researchers and studies coming together to achieve a like purpose of revealing DCM, then falling away to become unique entities when the purpose has been served, I see this text working as an actor network (Latour, 2007). Together, actors and actants exert strength of purpose in support of DCM. This form is apparent in the paperback and in the Adobe Digital Edition, but the layout of the Kindle edition obscures the elegance of the authors’ dialogues.

This text tells research stories useful to anyone interested in shaping assessment tools in local contexts, whether in classrooms, programs, departments, or across institutions. While DCM is primarily aimed at writing assessment, other uses are evident in the text’s inclusion of critical thinking and learning-across-the-curriculum assessment. Some early reviewers perceived DCM as another approach to traditional rubrics, and Broad’s co-authors also express concern that their processes have slipped toward rubrics. But Broad dispels these concerns, reaffirming that local ownership accounts for variation in authentic DCM models. As a reader, I agree and have already begun planning assessment projects using the DCM process.

Broad reviews the theoretical foundation of DCM in Chapter 1. He writes, “Inspired by Guba and Lincoln’s Fourth Generation Evaluation (1989) and Glaser and Strauss’s grounded theory (1967), the DCM approach promotes inductive (democratic) and empirical (ethnographic) methods for generating accurate and useful accounts of what faculty and administrators value in their students’ work” (p. 5). Because I have used Guba and Lincoln’s methods to gather quantitative and qualitative data in my own research, DCM seems intuitive, a natural extension of proven procedures. Some reviewers of Broad’s earlier book saw DCM as too labor intensive, impracticable, and just another approach to traditional rubrics (p. 5). An important distinction of DCM, observed by Belanoff and Denny (2006), is “‘that [such a rubric] will be applicable only within the context in which it is created’ (135)” (pp. 5-6). However, the five DCM projects presented in Organic Writing Assessment show that the flexible, home-grown application of DCM makes good use of time and labor, and produces usable criteria maps that occasionally include rubrics. These models show that DCM is doable, and that, while the first purpose is to create home-grown assessment, the process is transferable across institutional and departmental boundaries. And while Broad’s co-authors express concern that their maps are too close to rubrics to be authentic DCM models, Broad assures them that they are “not only ‘legitimate’ practitioners of DCM but also pioneers of the next generation of praxis in large-scale writing assessment and faculty professional development” (p. 12).

In Chapter 2, Linda Adler-Kassner and Heidi Estrem discuss their DCM approach to a programmatic assessment of English 121, a required general education writing course at Eastern Michigan University. Students reported increased confidence with writing from beginning to end of the course, part of a two-year writing sequence focusing on place and genre. But administrators wanted to know what experts—not only students—said about students’ writing. In response, the authors employed a DCM protocol that evolved to include focus groups made up of students, faculty, staff, and administrators. Results of this DCM assessment process have influenced professional development and curriculum trajectories, generated interest among writing program administrators, and provided data to support the program. In my opinion, such robust generation of rich data makes DCM worthy of consideration.

Barry Alford of Mid Michigan Community College (MMCC)—the only two-year college represented in the text—explains in Chapter 3 that his colleagues view DCM as an acceptable institutional assessment method. This project is particularly interesting because it is aimed at opening up conversation among disciplinary faculty and uncovering information useful for teaching among faculty who carry heavy teaching loads and are separated by disparate educational goals. Alford writes that differences among faculty in such environments “are so extreme that many institutions avoid even trying to assess common student outcomes” (p. 37). But by relying on already expressed values and existing student work, Alford and the MMCC faculty used DCM to uncover concepts hidden behind seemingly unrelated disciplinary content and student projects. Their process led to creation of a map with three criteria: 1) working from multiple perspectives; 2) application; and 3) communication and presentation skills (p. 42). Disciplinary faculty were then asked to identify where and how these valued criteria were measured in their courses.

In focusing upon student improvement rather than upon testing instruments, the MMCC dynamic criteria map moves the institution away from a compliance model, the dominant form of assessment at the community college level. I find this example intriguing because it exemplifies the potential of a bottom-up assessment method to inform institutional values, invite interdisciplinary conversation and collaboration, and, most importantly, benefit students. Also, by beginning with the institution’s expressed values and going beyond (or behind) them to identify concepts, Alford has shown that the work of developing a dynamic criteria map does not have to begin at ground zero.

In Chapter 4, Jane Detweiler and Maureen McBride of the University of Nevada, Reno (UNR) discuss DCM in vertical assessment of first-year writing and critical thinking. Anticipating faculty resistance to a heavy time commitment, the program recruited student interns to facilitate the assessment. Detweiler, McBride, and six interns saw low survey participation, but the DCM process continued with focus groups of instructors, who were asked to create movie posters depicting their assessment concerns, followed by lists of values.

The UNR team developed a star-shaped assessment model with numerical values along its arms for scoring, yielding statistically significant data. The map was accompanied by a scoring guide (a matrix with teacher-generated descriptors) and a comment sheet (space for three entries related to issues noticed but not scored on the map, and three entries related to issues that had been scored) (p. 66). This DCM process, including qualitative and quantitative research, has influenced UNR’s teacher preparation and continued assessment, providing a means of “closing the loop.” I find UNR’s map to be an accessible, usable assessment tool. During portfolio assessment, dots assigned numerical values are connected across arms to create visual images that can be quickly interpreted and sorted. The map also provides space for comments on criteria that might be included in a later iteration of the assessment map.

In Chapter 5, Susanmarie Harrington and Scott Weeden at Indiana University-Purdue University Indianapolis (IUPUI) tell how changes in the writing program’s faculty, plus motivation to revise course goals and teaching approaches, had increased tensions in the department. In “address[ing] the failings in rubrics” that allow a single grade or adjective to represent complex ideas, Harrington and Weeden led writing faculty to seek detail through DCM (p. 78). Their process evolved to include discussion of sample portfolios, analysis and clustering of terms recorded during discussion, data presentation by way of document production, creation of a dynamic rubric, and application of the resulting dynamic rubric in teaching and grading (p. 82). The resulting descriptors were catalogued under three headings—high (above passing), medium (passing), and low (below passing)—and called an “UnRubric,” a guide to assessing “variety in performance within common values” rather than serving as a compliance instrument (p. 96). The authors point out that the language of the UnRubric promotes assessment based on qualities apparent in student writing rather than by degree of compliance with requirements. Harrington and Weeden reported that the DCM process reduced discontent with the curriculum (p. 95). IUPUI’s successful collaborative discussion of the DCM process among faculty, plus similar successes within other institutions and programs, suggests to me as a WAC director that the process is worth trying for devising assessment instruments and building consensus.

In the final DCM project in this book (Chapter 6), Eric Stalions presents his work while a graduate student at Bowling Green State University. His purpose was to develop a qualitative and quantitative research approach to assessing placement decisions in the General Studies Writing program, and to “close the loop” between assessment and curriculum. Working with transcripts of four placement evaluator pairs and the coordinator’s program training and documents, Stalions developed a dynamic criteria map for each of three placement options. He explored evaluative criteria found in collected data that had not been described in existing program placement criteria, and observed that placement readers “expressed…a desire to be persuaded” in their assessment decisions (p. 136).

Stalions suggests that criteria used frequently by placement evaluators, but not included in assessment values, should be discussed and articulated to affect course assessment and curriculum. This is somewhat like returning to a played-out placer bed and panning for smaller flakes of gold left behind or ignored in the initial process. The newly discovered flakes are just as precious as those that came before. Similarly, criteria found in the DCM process are valuable, perhaps critical, to assessing the whole value of a piece of student writing and influencing teaching practices. The refinement of known and newly discovered values adds currency to institutional placement assessment and pedagogical aims.

Broad returns in Chapter 7 to summarize, to synthesize DCM processes, and to query what has been learned. He also respectfully objects to Brian Huot’s 2008 call at the Conference on College Composition and Communication for government regulation of writing assessment, asking instead whether organic assessment through DCM might change the face of higher education. While Broad agrees that government oversight of the testing industry is needed, he argues that home-grown assessment like the DCM processes may be the answer. I mostly agree with Broad; however, I do not see DCM as a panacea that fits into all institutional environments. Still, from the projects collected in Organic Writing Assessment, it is clear that DCM has only begun to seed itself across academia and that much can be expected from its widespread planting.


Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Broad, B., Adler-Kassner, L., Alford, B., Detweiler, J., Estrem, H., Harrington, S.,…Weeden, S. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan, UT: Utah State University Press.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage Publications.

Latour, B. (2007). Reassembling the social: An introduction to actor-network-theory. New York, NY: Oxford University Press.

Part I: Review of Norbert Elliot’s and Les Perelman’s (Eds.) _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_

Part I:  Review of Norbert Elliot’s and Les Perelman’s (Eds.) Writing Assessment in the 21st Century:  Essays in Honor of Edward M. White

Elliot, N., & Perelman, L. (Eds.) (2012).  Writing assessment in the 21st century:  Essays in honor of Edward M. White.  New York, NY:  Hampton Press.

By Jessica Nastal, University of Wisconsin-Milwaukee

Writing Assessment in the 21st Century: Essays in Honor of Edward M. White is written as “a tribute in [Ed White’s] honor. In this testament to White’s ability to work across disciplinary boundaries, the collection is also a documentary, broadly conceived, of the states of writing assessment practice in the early 21st century” (p. 2). That emphasis on interdisciplinary collaboration to develop ethical assessment methods is evident throughout the introduction and the book as a whole. It is also, Norbert Elliot and Les Perelman argue, one of White’s significant contributions to the field.

Elliot and Perelman explain how Writing Assessment developed out of a celebration on the 25th anniversary of Ed White’s Teaching and Assessing Writing at the 2010 Conference on College Composition and Communication and the subsequent open-source Web site dedicated to collaboration among contributors “to document the state of practice of writing assessment in the early 21st century” (p. 12). Most generally, Writing Assessment in the 21st Century traces the history of writing assessment to provide readers with an understanding of the field and suggestions for where we might head in the future.

As a PhD candidate in Rhetoric and Composition with research areas in composition pedagogy, multilingual writing, and writing assessment, I find the book helpful in a number of ways. I appreciate seeing White’s call to encourage interdisciplinarity within writing assessment in action, as Writing Assessment’s 35 chapters include familiar names in writing assessment and composition studies (including this journal’s editors) – as well as directors of the National Writing Project, Educational Testing Service (ETS), writing-across-the-curriculum programs, federal governmental agencies, and scholars in technical communication and second language writing.

Because it is a hefty tome – over 500 pages – I will review Writing Assessment in the 21st Century in a series of posts. The first (this one) will consider the first of Writing Assessment’s four sections, and will be followed by individual posts for each section along with a final post to discuss the book as a whole. Part I: “The Landscape of Contemporary Writing Assessment” helps situate readers and demonstrates the breadth of writing assessment as it addresses how shifts within the field have come to influence our practices as educators and assessors of writing.

The result is refreshing: As I read the first section, I felt comfortable (“Oh, I recognize this idea!”) and challenged (“Wait, there’s more to understand about the Harvard Entrance Exams than we’ve written about in the past hundred-plus years?”). Sherry Seale Swain and Paul LeMahieu’s “Assessment in a Culture of Inquiry,” for example, discusses how the National Writing Project created the Analytic Writing Continuum as “an opportunity to explore the potential of assessment that is locally contextualized yet linked to a common national framework and standards of performance” by including K-16 teachers, researchers, and educational testing experts (p. 46). In this sense, the book affirms White’s position on writing assessment; Swain and LeMahieu document the positive results that occur when we collaborate across disciplinary boundaries.

Margaret Hundleby’s chapter, “The Questions of Assessment in Technical and Professional Communication,” raises many questions for me, someone who has held jobs but taken no coursework in technical and professional communication (TPC). Hundleby presents ideas of validity that are new to me as she describes dominant methods of TPC assessment in the post-World War II era, when scholars “[used] measurement to demonstrate both that the communication products could be relied on, and that the communicator was valid, or fully professional” (p. 119). What does it mean to be “fully professional”? How might assessments in composition studies change if we used that form of validity? How does it affect a piece of TPC writing?

Similarly, chapters by ETS researchers cause me to ask new questions, particularly in light of my first experience as an AP exam reader this summer. In “Rethinking K-12 Writing Assessment,” Paul Deane states: “We start by considering writing as a construct, viewed both socially and cognitively in terms of our competency model,” which initially raised some flags for me—how can we begin with assessing students’ competencies, particularly in a standardized exam (p. 90)? But the chapter encouraged me to be more open-minded about educational testing companies, too, as I realized Deane and ETS value writing as situated in local contexts, reflecting cultural practices (pp. 88, 97), and assessment as a method to reflect upon and improve teaching (p. 95). I still need to be convinced of the benefits of automated scoring, but Writing Assessment allows me to read ideas and research from a broader spectrum than I might ordinarily, and to realize we writing assessment folks share many core values.

Next: Part II: “Strategies in Contemporary Writing Assessment”

JWA at IWAC conference in Savannah

The Journal of Writing Assessment wanted to give a special thanks to some people who helped promote JWA at the recent International Writing Across the Curriculum conference in Savannah, Georgia.

First, thanks to Nick Carbone of Bedford/St. Martin’s who gave us space at his table to distribute our fliers.  This space was centrally located and was in a high traffic flow area of the conference.  Thank you so much, Nick, for your support!

Secondly, we want to thank Twenty Six LLC for featuring JWA on their banner as part of the portfolio of their work. Twenty Six LLC designed and hosts the JWA website and we really appreciate their excellent work!

Thanks again!  The IWAC conference was excellent and there were many sessions that focused on issues of writing assessment.  We welcome submissions from this conference to JWA!

Diane Kelly-Riley and Peggy O’Neill, Editors

Susan Callahan’s review of George Hillocks’s _The Testing Trap: How State Writing Assessments Control Learning_.

Here is another review from the Journal of Writing Assessment’s archives:

Please read Susan Callahan’s review:  “Testing the tests” from Volume 2 Number 1 of the Journal of Writing Assessment from Spring 2005.

Callahan reviews George Hillocks’s The testing trap: How state writing assessments control learning (New York: Teachers College Press, 2002). 240 pages. Paperback: $23.95, ISBN 0807742295; cloth: $54.00, ISBN 0807742309.