Writing Assessment and Race Studies Sub Specie Aeternitatis: A Response to _Race and Writing Assessment_

By Rich Haswell, Haas Professor of English Emeritus, Texas A&M University, Corpus Christi

If understanding is impossible, knowing is imperative.

—Primo Levi

I was filling up my car when I noticed my traveling companion staring at a man in the next row of pumps, staring with fixed malevolence. The man was a stranger. He was also black. With alarm I realized my friend was executing the Southern hate stare. I had never witnessed a hate stare before, only having read about it in John Howard Griffin’s book Black Like Me. I was dumbfounded. We were in the middle of Wyoming, and my companion, a generation older than me, had lived nowhere east of Great Basin country. I had never seen him betray a shred of racism before. The stranger studiously disregarded the stare, finished filling up his car, and drove away. We drove away. Appropriately enough, the station we left was that cultural mix of neon, bad food, fuel, tourists, and truck drivers called Little America.

This incident happened thirty-seven years ago. It was the kind of racism—hateful, publicly hostile, akin to sundown laws and burning crosses—that we all pray a larger America has left behind. Racism itself, of course, we have not left behind. It has just diversified. A “new racism” is still present in a legion of forms: structural racism, institutionalized racism, benevolent racism, color-blind racism, scientific racism, culturalist racism, internalized racism, ethnopluralist racism, whitely racism, everyday racism, microaggressive racism—racisms more or less covert, more or less hateful. These terms are the fruit of the new race studies, which have emerged hand in hand with the new racism. Race studies study race in order to understand it better, in order to find ways to deal with people of all races with sympathy and equity, in order, eventually, they hope, to eliminate racial inequalities everywhere. Equality, of course, may not exactly fit some of the ways that the study of race applies its findings—think, for instance, of affirmative action. Thus Florida’s state board of education recently set grade-level test goals in reading at 90% for Asian students, 88% for whites, 81% for Hispanics, and 74% for blacks. The members of the Florida board of education do not hate Asian students in setting their particular bar so high but rather hope that whites, Hispanics, and blacks, given time, will catch up.

In doing so, of course, the board affirmed the existence of race. Herein lies a major contradiction in the new race studies. Often they start with critique (“critical race theory”) that argues “race” is a figment, a social, political, and ideological construction, and then they end by affirming and even celebrating racial categories and groups. A common line of argument first deconstructs “race,” then attacks “racism,” and then confirms the equality of all races.

The contradiction runs through Race and Writing Assessment, edited by Asao B. Inoue and Mya Poe (Peter Lang, 2012). At the top of the book, Inoue and Poe state the current truism that “race” is “artificial,” not biological but rather a “social and political construction” (p. 4). And indeed the book tends not to use the terms “racism” and “racist” at all, preferring expressions such as “racialism,” “racialization,” “raciology,” and “racial formation” (exceptions are chapters by Valerie Balester, Nicholas Behm & Keith D. Miller, and Rachel Lewis Ketai). Yet over and over, as the book furthers projects and programs in writing assessment, it treats race as an objective reality, just as Florida’s board of education has done. In fact an early chapter in the book asks that registrars of USA universities record students’ race (Diane Kelly-Riley), and the last chapter wishes that France’s laws forbidding categorization by racial groups be held in abeyance for academic scholars (Élisabeth Bautier & Christiane Donahue). Despite Inoue’s start, nowhere afterward does the book deconstruct race. Puzzled, I ask a transgressive question. Does Race and Writing Assessment, in its commendable efforts to act with more fairness toward students of every color, help maintain racism? Let me state my question as a logical aporia. People cannot go about eliminating racism without constructing the notion of race, and the construction of race can only further racism.

This book made me realize again that today, for all of us, “race” is aporetic. It is so by its unavoidable nature as interim, by its being for the time being. One day—a future against which currently many groups fervently fight—interracial marriage globally will eliminate “race.” Eliminated will be the unavoidable first thought on meeting a stranger: “white,” “non-white,” “Asian,” “Amerindian,” “Pacific Islander.” Eliminated will be those euphemistic replacements for “race” that attract scholars in race studies: “population,” “people,” “ethnicity,” “color,” “diversity,” “differentiation,” “nonmainstream,” “minority.” Sub specie aeternitatis, so to speak, the subspecies, or more accurately, the varieties of Homo sapiens today called human “races,” will be no more. Color and pallor will blend into one. But until that phenotypic dispersal, we live racial aporias.

So to the point. Until then, any writing assessment shaped by anti-racism will still be racism or, if that term affronts, will be stuck in racial contradictions. Here are four racial aporias, subsets of the basic aporia expressed above, currently embedded in writing assessment and illustrated by Race and Writing Assessment, although not explicitly expressed there.

To correct racially aligned outcomes, writing assessment must apply benchmarks that are racially aligned. As a whole this book supports the notion that to “democratize” our classrooms (Nicole M. Merola, p. 165), no racial group there should be “overrepresented” or “underrepresented” (Inoue, p. 84). Stratification by race, however, is not as straightforward as it appears. In 2007 at Juilliard, with student placement by SAT scores and educational background, enrollment into Fundamentals, the basic writing course, was 50% Asians and 14% whites. In 2010, after a change to an impromptu essay for placement, enrollment in Fundamentals was 41% Asian and 52% white (Anthony Lioi, p. 160). Lioi judges the change “more valid” because now the course population is “more representative of the general racial composition of the student body” (p. 161). This kind of benchmarking Lioi’s co-author Nicole M. Merola calls “isomorphism with the demographics” (p. 164). I won’t examine the unexamined assumption underlying this approach to validation of writing-assessment practices, that the percent of problem writers is the same in every racial group. But why the particular demographic chosen? Why not racial makeup of students applying, or of students graduating, or of USA population as a whole? More to the point, note that this racial isomorphism undercuts other kinds of isomorphism. Why isn’t Juilliard’s basic-writing placement system validated by isomorphic representation of social class, or nationality, or chosen academic major? There is some tacit discrimination going on. Why the effort to represent race evenly and not gender, for instance? Study after study has shown adolescent males performing worse than females on tests of verbal ability, and across the nation males are overrepresented in basic writing courses, yet I know of no professional interest in altering tests or writing placement systems to correct this misrepresentation. Naturally this book aligns itself with race. But as the poet Ai said, “The insistence that one must align oneself with this or that race is basically racist” (1980, p. 277).

Writing-assessment categorization by race erases the individual, yet it is only the individual that can erase race. As Ai continues, “And the notion that without a racial identity a person can’t have any identity perpetuates racism” (p. 277). My emphasis is on a person. Lioi and Merola’s chapter does not sustain a look at any individual student or any individual piece of writing. Nor do the majority of the other chapters (the two exceptions are studies of particular student essays by Anne Herrington & Sarah Stanley, and Zandra L. Jordan). What gets lost when the individual gets lost in discussions of race? For one thing, the notion of race as socially constructed. When “Latino” becomes an assessment category, effaced are persons who have been put in this category but whose individual qualities might well dispute the categorization—e.g., mother of Iberian heritage, father of Amerindian and African heritage. Not only do authors use “Latino” or “Hispanic” as a race category without raising the issue of the legitimacy of that categorization, I don’t remember one author in this collection even raising the issue of “mixed race.” It seems that when “racial formation” becomes the focus, race gets affirmed and the possible elimination of race—phenotypic dispersal—gets tabled. Ai is 1/2 Japanese, 1/8 Choctaw, 1/4 African American, and 1/16 Irish. She is the future, but she doesn’t fit in this book. In Florida’s new reading standards, which set a passing rate of 74% for blacks and 90% for Asian students, what would the board do with Ai, or with any student who happens to be part African and part Asian? The issue marks one spot where race studies in writing assessment badly need to catch up. Women’s studies have been exploring ways out of the trap of essentialism for three decades now.

Writing research into racial formations disregards individual agency, but individual agency fuels racial formations. Sartre recounts the story of a woman who hated Jews because a Jewish furrier had ruined one of her furs. Sartre famously asks why she chose to hate Jews and not furriers. What is usually not cited is Sartre’s next sentence: “Why Jews or furriers rather than such and such a Jew or such and such a furrier?” His answer is that she has a “passion,” a “predisposition toward anti-Semitism” (1943, pp. 11-12). In a word, Sartre is describing prejudice, the individual, psychological dynamic that the old race studies have investigated for a century now and that still helps drive racial discrimination. Race and Writing Assessment studiously ignores this dynamic. Inoue and Poe make quite clear why. “Racial formation,” they say, is “not about prejudice, personal biases, or intent” but about “forces in history, local society, schools, and projects—such as writing programs” (p. 6). The new racism is perpetrated not by individuals but by institutions such as language, curricula, or assessment systems. Inoue and Poe’s contributors agree. They investigate how race is written into machine-scoring programs (Herrington & Stanley), how writing rubrics are mono-cultural (Balester), how a grading contract system favors certain races (Inoue), how African American English is viewed at historically black colleges (Jordan) and at traditionally Anglo graduate schools (Behm & Miller), how directed self-placement systems (Ketai) and standardized placement testing (Lioi & Merola, and Kathleen Blake Yancey) may further racism, and how topics in French school assessment disadvantage immigrant students (Bautier & Donahue). This is all good, in part because it provides a new understanding of racism as embedded and hidden in verbal and social structures.

Yet the erasure of classic hostile individual prejudice—and I can’t think of one specific example in this book—cannot be all good. Surely acts of individual racial prejudice have not entirely disappeared from the college writing-assessment scene. Kelly-Riley remarks that in essay-evaluation sessions, “Faculty raters or the other members of the rating community may unwittingly introduce silently held, negative beliefs” (p. 33). I wouldn’t put it so gently. It wasn’t that long ago that Jan and I recorded a composition teacher saying about the student author of an anonymous essay, “he might be a black student and is not probably used to looking at abstractions” (Haswell and Haswell, 1995, p. 245). More to the point, the department-sanctioned rubric, or the grading contract, or the directed self-placement instructions, or the computerized diagnostic program was written by individuals and is applied, essay by essay, by individual teachers and students. Should we let off the hook the individual agency that is still necessary for institutional racism to operate?

Writing scholars position themselves outside institutional racism to understand it but their understanding concludes that there is no outside. By virtue of their scholarly perspective, can writing scholars also be let off the hook? Nowhere in Race and Writing Assessment does any contributor note the possible contradiction between their opposition to institutionalized racism and their belief that institutionalized racism is everywhere—a contradiction, by the way, that has been fully explored in sociological race studies (see Robert Miles, 1989 and Howard Winant, 2005). Behm and Miller talk of the “ubiquity of racism and the hidden power relations that perpetuate it” (p. 135). Ketai assures us that “Race is woven throughout the fabric of placement testing and through conceptions of literacy and educational identity” (p. 145). None of them voice the possibility that this pervasiveness of racial formations might include their own relations, conceptions, and identities. Behm and Miller propose that students can extract themselves through critique (“critical race narratives”) and Ketai offers contextualization as a way to rescue directed self-placement. But they don’t entertain the high probability that critique and context will remain racialized. Nor do the editors note that their book, which repeatedly castigates the stylistic criterion of high academic English as a racial formation, is entirely written in high academic English. And far beyond the margins of Race and Writing Assessment lies the troubling contradiction, often expressed in the literature of racism (“The horror, the horror”), that immersion in the disease of racial inequality risks contamination.

Logically there is no escape from an aporia. As in the hermeneutic circle, where knowledge of the parts is defined by the whole and knowledge of the whole is determined by the parts, in scholarly circles knowledge of institutionalized racism is held by members of the racialized institution. Racial aporias will end only when race itself ends. Primo Levi’s readers sometimes asked him if he understood the level of racial hatred that created the Holocaust. Still remembered is his astonishing response. “No, I don’t understand it nor should you understand it,” he wrote, “it’s a sacred duty not to understand” (1965, p. 227). For to understand is to subsume.

Less remembered, however, is Levi’s continuation a page later: “We cannot understand it, but we can and must understand from where it springs, and must be on our guard. If understanding is impossible, knowing is imperative, because what happened could happen again” (p. 228). By “knowing” Levi meant “modest and less exciting truths, those one acquires painfully, little by little and without shortcuts, with study, discussion, and reasoning, those that can be verified and demonstrated” (p. 229). Levi is asking his readers not to forget that understanding may happen once and for all, but knowledge comes at different times and at different stages for different purposes.

This is why, despite what my own readers may be thinking, I applaud Race and Writing Assessment. Until race disappears, racism and racial formations will be with us, but in the interim, “little by little,” they can be exposed and ameliorated. In this book essay after essay ferrets out unfair writing-assessment practices. That takes courage, especially since some of the practices—such as writing rubrics, grading contracts, computerized evaluation, and directed self-placement—wield a hefty amount of professional esteem. And essay after essay shows more equitable outcomes when particular assessment practices are changed and applied. Scholarly study of racial effects has made the placement systems demonstrably less unfair at Washington State University (Kelly-Riley), the Juilliard School (Lioi), the Rhode Island School of Design (Merola), and Oregon State University (Yancey)—and study of teacher bias at the University of California, Merced and Fayetteville State University in North Carolina surely will improve teacher practice at those places in the future (Judy Fowler & Robert Ochsner). This scholarship may involve Levi’s “modest and less exciting truths” and, as I argue, it may not have entirely extricated itself from race, but for certain individual students it has made writing assessment more just. During the millennia that it will take for race to disappear, growth in racial justice is not impossible.

Let’s face it, though. In writing assessment, that growth will happen by short, incremental steps. As Chris Anson’s chapter makes abundantly clear, past compositionists have had an inclination to pretend their operations are free of race, constructed or not. As for future writing-assessment studies, the scholarly stare is hardly comparable to the Southern hate stare, but scholars must, as Levi cautions, constantly be on their guard. Kelly-Riley puts the situation honestly and exactly: “if classrooms are microcosms of our larger society—complete with problems of injustice and inequity—then it is not reasonable to think that all students or teachers or disciplines can be safeguarded against intentional or unintentional bias” (p. 32). But some can, as this book shows. This is the reason to congratulate the editors and contributors of Race and Writing Assessment.


Ai. (1980). Ai [Florence Anthony]. In F. C. Locher (Ed.), Contemporary authors. Detroit, MI: Gale Research.

Haswell, J., & Haswell, R. H. (1995). Gendership and the miswriting of students. College Composition and Communication, 46(2), 223-254.

Levi, P. (1965). The reawakening. Boston, MA: Little, Brown.

Miles, R. (1989). Racism. London: Routledge & Kegan Paul.

Sartre, J.-P. (1948). Anti-Semite and Jew: An exploration of the etiology of hate. New York, NY: Schocken Books. (Original work published 1943)

Winant, H. (2005). Race and racism: Overview. In M. C. Horowitz (Ed.), New dictionary of the history of ideas (Vol. 5). Detroit, MI: Charles Scribner’s Sons.

CFP: Responses to Common Core State Standards, Smarter Balanced Assessment Consortium and Partnership for Assessment of Readiness for College and Career

The Journal of Writing Assessment is interested in scholars’ responses to the writing assessments connected with the Common Core State Standards (www.corestandards.org) that are in development. The two main consortia, Smarter Balanced Assessment Consortium (SBAC) and Partnership for Assessment of Readiness for College and Career (PARCC), have released various types of information about the assessments, including approach, use of technology, and sample items. While it is too early to have any full-fledged research about the specific writing assessments, theoretical discussions and critical reviews of material released from SBAC (http://www.smarterbalanced.org/) and PARCC (http://www.parcconline.org/) are welcome.

The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and non-educational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.

For more information, and for submission guidelines, visit JWA online http://www.journalofwritingassessment.org/.

Review of Bob Broad et al.’s Organic Writing Assessment: Dynamic Criteria Mapping in Action

By Donna Evans, Eastern Oregon University

Broad, B., Adler-Kassner, L., Alford, B., Detweiler, J., Estrem, H., Harrington, S.,…Weeden, S. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan, UT: Utah State University Press. 174 pages.

In this text, co-authors from five different institutions have answered Broad’s call “to move beyond traditional assessment practices that over-simplify learning, teaching, and assessment, and to ‘embrace the life of things’” (p. 5). Relying primarily on Dynamic Criteria Mapping (DCM) methodology, first described by Broad in What We Really Value (2003), each project is designed to be rhetorically responsive to a unique institutional audience and investigational purpose. As a result, the processes, products, and analyses they report support the premise that what writing assessment experts increasingly value—locally grown, organic assessment—can be brought to fruition and yield bumper harvests of usable data.

Interestingly, the authors drafted their text in a dynamic form about a dynamic process. Broad supplies the first and last chapters, interchapters appear between most chapters, co-authors embed paragraph-long comments within the text of Chapters 2 through 6, and Broad is referenced throughout, creating traces of actants (forces for change) within a network structure. With disparate researchers and studies coming together to achieve the like purpose of revealing DCM, then falling away to become unique entities once that purpose has been served, I see this text working as an actor network (Latour, 2007). Together, actors and actants exert strength of purpose in support of DCM. This form is apparent in the paperback and in the Adobe Digital Edition, but the layout of the Kindle edition obscures the elegance of the authors’ dialogues.

This text tells research stories useful to anyone interested in shaping assessment tools in local contexts, whether in classrooms, programs, departments, or across institutions. While DCM is primarily aimed at writing assessment, other uses are evident in the text’s inclusion of assessments of critical thinking and of learning across the curriculum. Some early reviewers perceived DCM as just another approach to traditional rubrics, and Broad’s co-authors also express concern that their processes have slipped toward rubrics. But Broad dispels these concerns, reaffirming that local ownership accounts for variation in authentic DCM models. As a reader, I agree and have already begun planning assessment projects using the DCM process.

Broad reviews the theoretical foundation of DCM in Chapter 1. He writes, “Inspired by Guba and Lincoln’s Fourth Generation Evaluation (1989) and Glaser and Strauss’s grounded theory (1967), the DCM approach promotes inductive (democratic) and empirical (ethnographic) methods for generating accurate and useful accounts of what faculty and administrators value in their students’ work” (p. 5). Because I have used Guba and Lincoln’s methods to gather quantitative and qualitative data in my own research, DCM seems intuitive, a natural extension of proven procedures. Some reviewers of Broad’s earlier book saw DCM as too labor intensive, impracticable, and just another approach to traditional rubrics (p. 5). An important distinction of DCM, observed by Belanoff and Denny (2006), is “‘that [such a rubric] will be applicable only within the context in which it is created’ (135)” (pp. 5-6). However, the five DCM projects presented in Organic Writing Assessment show that the flexible, home-grown application of DCM makes good use of time and labor, and produces usable criteria maps that occasionally include rubrics. These models show that DCM is doable, and that, while the first purpose is to create home-grown assessment, the process is transferable across institutional and departmental boundaries. And while Broad’s co-authors express concern that their maps are too close to rubrics to be authentic DCM models, Broad assures them that they are “not only ‘legitimate’ practitioners of DCM but also pioneers of the next generation of praxis in large-scale writing assessment and faculty professional development” (p. 12).

In Chapter 2, Linda Adler-Kassner and Heidi Estrem discuss their DCM approach with a programmatic assessment of English 121, a required general education writing course at Eastern Michigan University. Students reported increased confidence with writing from beginning to end of the course, part of a two-year writing sequence focusing on place and genre. But administrators wanted to know what experts—not only students—said about students’ writing. In response, the authors employed a DCM protocol that evolved to include focus groups made up of students, faculty, staff, and administrators. Results of this DCM assessment process have influenced professional development and curriculum trajectories, generated interest among writing program administrators, and provided data to support the program. In my opinion, such robust generation of rich data makes DCM worthy of consideration.

Barry Alford of Mid Michigan Community College (MMCC)—the only two-year college represented in the text—explains in Chapter 3 that his colleagues view DCM as an acceptable institutional assessment method. This project is particularly interesting because it is aimed at opening up conversation among disciplinary faculty and uncovering information useful for teaching among faculty with heavy teaching loads and separated by disparate educational goals. Alford writes that differences among faculty in such environments “are so extreme that many institutions avoid even trying to assess common student outcomes” (p. 37). But by relying on already expressed values and existing student work, Alford and the MMCC faculty used DCM to uncover concepts hidden behind seemingly unrelated disciplinary content and student projects. Their process led to creation of a map with three criteria: 1) working from multiple perspectives; 2) application; and 3) communication and presentation skills (p. 42). Disciplinary faculty were then asked to identify where and how these valued criteria were measured in their courses.

In focusing upon student improvement rather than upon testing instruments, the MMCC dynamic criteria map moves the institution away from a compliance model, the dominant form of assessment at the community college level. I find this example intriguing because it exemplifies the potential of a bottom-up assessment method to inform institutional values, invite interdisciplinary conversation and collaboration, and, most importantly, benefit students. Also, by beginning with the institution’s expressed values and going beyond (or behind) them to identify concepts, Alford has shown that the work of developing a dynamic criteria map does not have to begin at ground zero.

In Chapter 4, Jane Detweiler and Maureen McBride of the University of Nevada, Reno (UNR) discuss DCM in vertical assessment of first-year writing and critical thinking. Anticipating faculty resistance to a heavy time commitment, they hired student interns to facilitate the assessment. Detweiler, McBride, and six interns saw low survey participation, but the DCM process continued with focus groups of instructors, who were asked to create movie posters depicting their assessment concerns, followed by lists of values.

The UNR team developed a star-shaped assessment model with numerical values along its arms for scoring, yielding statistically significant data. The map was accompanied by a scoring guide (a matrix with teacher-generated descriptors) and a comment sheet (space for three entries related to issues noticed but not scored on the map, and three entries related to issues that had been scored) (p. 66). This DCM process, including qualitative and quantitative research, has influenced UNR’s teacher preparation and continued assessment, providing a means of “closing the loop.” I find UNR’s map to be an accessible, usable assessment tool. During portfolio assessment, dots assigned numerical values are connected across arms to create visual images that can be quickly interpreted and sorted. The map also provides space for comments on criteria that might be included in a later iteration of the assessment map.

In Chapter 5, Susanmarie Harrington and Scott Weeden at Indiana University Purdue University Indianapolis (IUPUI) tell how changes in the writing program’s faculty, along with motivation to revise course goals and teaching approaches, had increased tensions in the department. In “address[ing] the failings in rubrics” that allow a single grade or adjective to represent complex ideas, Harrington and Weeden led writing faculty to seek detail through DCM (p. 78). Their process evolved to include discussion of sample portfolios, analysis and clustering of terms recorded during discussion, data presentation by way of document production, creation of a dynamic rubric, and application of the resulting dynamic rubric in teaching and grading (p. 82). The resulting descriptors were catalogued under three headings—high (above passing), medium (passing), and low (below passing)—and called an “UnRubric,” a guide to assessing “variety in performance within common values” rather than a compliance instrument (p. 96). The authors point out that the language of the UnRubric promotes assessment based on qualities apparent in student writing rather than on degree of compliance with requirements. Harrington and Weeden reported that the DCM process reduced discontent with the curriculum (p. 95). IUPUI’s successful collaborative discussion of the DCM process among faculty, plus similar successes within other institutions and programs, suggests to me as a WAC director that the process is worth trying for devising assessment instruments and building consensus.

In the final DCM project in this book (Chapter 6), Eric Stalions presents work he completed while a graduate student at Bowling Green State University. His purpose was to develop a qualitative and quantitative research approach to assessing placement decisions in the General Studies Writing program, and to “close the loop” between assessment and curriculum. Working with transcripts from four pairs of placement evaluators, along with the program coordinator’s training materials and documents, Stalions developed a dynamic criteria map for each of three placement options. He explored evaluative criteria found in the collected data that had not been described in existing program placement criteria, and observed that placement readers “expressed…a desire to be persuaded” in their assessment decisions (p. 136).

Stalions suggests that criteria used frequently by placement evaluators, but not included in assessment values, should be discussed and articulated to affect course assessment and curriculum. This is somewhat like returning to a played-out placer bed and panning for smaller flakes of gold left behind or ignored in the initial process. The newly discovered flakes are just as precious as those that came before. Similarly, criteria found in the DCM process are valuable, perhaps critical, to assessing the whole value of a piece of student writing and influencing teaching practices. The refinement of known and newly discovered values adds currency to institutional placement assessment and pedagogical aims.

Broad returns in Chapter 7 to summarize, to synthesize DCM processes, and to query what has been learned. He also respectfully objects to Brian Huot’s 2008 call at the Conference on College Composition and Communication for government regulation of writing assessment, asking instead whether organic assessment through DCM might change the face of higher education. While Broad agrees that government oversight of the testing industry is needed, he argues that home-grown assessment like the DCM processes may be the answer. I mostly agree with Broad; I do not, however, see DCM as a panacea that fits all institutional environments. Still, from the projects collected in Organic Writing Assessment, it is clear that DCM has only begun to seed itself across academia and that much can be expected from its widespread planting.


Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Broad, B., Adler-Kassner, L., Alford, B., Detweiler, J., Estrem, H., Harrington, S.,…Weeden, S. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan, UT: Utah State University Press.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage Publications.

Latour, B. (2007). Reassembling the social: An introduction to actor-network-theory. New York, NY: Oxford University Press.

Part I: Review of Norbert Elliot and Les Perelman’s (Eds.) _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_

Elliot, N., & Perelman, L. (Eds.). (2012). Writing assessment in the 21st century: Essays in honor of Edward M. White. New York, NY: Hampton Press.

By Jessica Nastal, University of Wisconsin-Milwaukee

Writing Assessment in the 21st Century: Essays in Honor of Edward M. White is written as “a tribute in [Ed White’s] honor. In this testament to White’s ability to work across disciplinary boundaries, the collection is also a documentary, broadly conceived, of the states of writing assessment practice in the early 21st century” (p. 2). That emphasis on interdisciplinary collaboration to develop ethical assessment methods is evident throughout the introduction and the book as a whole. It is also, Norbert Elliot and Les Perelman argue, one of White’s significant contributions to the field.

Elliot and Perelman explain how Writing Assessment developed out of a celebration on the 25th anniversary of Ed White’s Teaching and Assessing Writing at the 2010 Conference on College Composition and Communication and the subsequent open-source Web site dedicated to collaboration among contributors “to document the state of practice of writing assessment in the early 21st century” (p. 12). Most generally, Writing Assessment in the 21st Century traces the history of writing assessment to provide readers with an understanding of the field and suggestions for where we might head in the future.

As a PhD candidate in Rhetoric and Composition with research areas in composition pedagogy, multilingual writing, and writing assessment, I find the book helpful in a number of ways. I appreciate seeing White’s call to encourage interdisciplinarity within writing assessment in action, as Writing Assessment’s 35 chapters include familiar names in writing assessment and composition studies (including this journal’s editors) – as well as directors of the National Writing Project, Educational Testing Service (ETS), writing-across-the-curriculum programs, federal governmental agencies, and scholars in technical communication and second language writing.

Because it is a hefty tome – over 500 pages – I will review Writing Assessment in the 21st Century in a series of posts. The first (this one) will consider the first of Writing Assessment’s four sections, and will be followed by individual posts for each section along with a final post to discuss the book as a whole. Part I: “The Landscape of Contemporary Writing Assessment” helps situate readers and demonstrates the breadth of writing assessment as it addresses how shifts within the field have come to influence our practices as educators and assessors of writing.

The result is refreshing: As I read the first section, I felt both comfortable (“Oh, I recognize this idea!”) and challenged (“Wait, there’s more to understand about the Harvard Entrance Exams than we’ve written about in the past hundred-plus years?”). In “Assessment in a Culture of Inquiry,” for example, Sherry Seale Swain and Paul LeMahieu discuss how the National Writing Project created the Analytic Writing Continuum as “an opportunity to explore the potential of assessment that is locally contextualized yet linked to a common national framework and standards of performance” by including K-16 teachers, researchers, and educational testing experts (p. 46). In this sense, the book affirms White’s position on writing assessment; Swain and LeMahieu document the positive results that occur when we collaborate across disciplinary boundaries.

Margaret Hundleby’s chapter, “The Questions of Assessment in Technical and Professional Communication,” raises many questions for me, someone who has held jobs in technical and professional communication (TPC) but taken no coursework in it. Hundleby presents ideas of validity new to me as she describes dominant methods of TPC assessment in the post-World War II era, when scholars “[used] measurement to demonstrate both that the communication products could be relied on, and that the communicator was valid, or fully professional” (p. 119). What does it mean to be “fully professional”? How might assessments in composition studies change if we used that form of validity? How does it affect a piece of TPC writing?

Similarly, chapters by ETS researchers cause me to ask new questions, particularly in light of my first experience as an AP exam reader this summer. In “Rethinking K-12 Writing Assessment,” Paul Deane states: “We start by considering writing as a construct, viewed both socially and cognitively in terms of our competency model,” which initially raised some flags for me—how can we begin with assessing students’ competencies, particularly in a standardized exam (p. 90)? But the chapter encouraged me to be more open-minded about educational testing companies, too, as I realized Deane and ETS value writing as situated in local contexts, reflecting cultural practices (pp. 88, 97), and assessment as a method to reflect upon and improve teaching (p. 95). I still need to be convinced of the benefits of automated scoring, but Writing Assessment allows me to read ideas and research from a broader spectrum than I might ordinarily, and to realize we writing assessment folks share many core values.

Next: Part II: “Strategies in Contemporary Writing Assessment”

JWA at IWAC conference in Savannah

The Journal of Writing Assessment would like to give special thanks to some people who helped promote JWA at the recent International Writing Across the Curriculum conference in Savannah, Georgia.

First, thanks to Nick Carbone of Bedford/St. Martin’s, who gave us space at his table to distribute our fliers.  This space was centrally located in a high-traffic area of the conference.  Thank you so much, Nick, for your support!

Secondly, we want to thank Twenty Six LLC for featuring JWA on their banner as part of the portfolio of their work. Twenty Six LLC designed and hosts the JWA website and we really appreciate their excellent work!

Thanks again!  The IWAC conference was excellent and there were many sessions that focused on issues of writing assessment.  We welcome submissions from this conference to JWA!

Diane Kelly-Riley and Peggy O’Neill, Editors

Susan Callahan’s review of George Hillocks’s _The Testing Trap: How State Writing Assessments Control Learning_

Here is another review from the Journal of Writing Assessment’s archives:

Please read Susan Callahan’s review:  “Testing the tests” from Volume 2 Number 1 of the Journal of Writing Assessment from Spring 2005.

Callahan reviews George Hillocks’s The testing trap: How state writing assessments control learning. New York: Teachers College Press, 2002. 240 pp. Paperback $23.95, ISBN 0807742295; cloth $54.00, ISBN 0807742309.

Anthony Edgington’s review of Lad Tobin’s _Reading Student Writing: Confessions, Meditations, and Rants_

Here is another review from the JWA archives:

Please read Anthony Edgington’s “Understanding Student Writing–Understanding Teachers Reading: Contextualizing Reading and Response” from Volume 2 Number 1 of the Journal of Writing Assessment.

Edgington reviews Lad Tobin’s Reading Student Writing: Confessions, Meditations, and Rants. Portsmouth, NH: Boynton/Cook, 2004. 416 pp. Paper $34.50, ISBN 1-57273-394-2.

Terry Underwood’s review of Liz Hamp-Lyons and William Condon’s _Assessing the Portfolio_

The Journal of Writing Assessment has many reviews in its archives.

Please read Terry Underwood’s “Portfolios across the centuries: A review of Assessing the Portfolio” from Volume 1 Number 2 of JWA.

You can find out more information about this text here: Hamp-Lyons, L., & Condon, W. (2000). Assessing the portfolio: Principles for practice, theory and research. Cresskill, NJ: Hampton Press.

Review of Sandra Murphy and Terry Underwood’s _Portfolio Practices: Lessons from Schools, Districts and States_

Murphy, S., & Underwood, T. (2000). Portfolio practices: Lessons from schools, districts and states. Norwood, MA: Christopher-Gordon.

As we start the JWA Reading List, we want to highlight some past reviews of noteworthy books on writing assessment that are available in the archives of the Journal of Writing Assessment.  All of these reviews are available as free downloads.

To begin, we’d like to draw your attention to Susan Callahan’s 2003 review of Sandra Murphy and Terry Underwood’s Portfolio Practices: Lessons from Schools, Districts and States, published in 2000 by Christopher-Gordon.

Diane Kelly-Riley and Peggy O’Neill, Editors
Journal of Writing Assessment

Review of Michael Neal’s _Writing Assessment and the Revolution in Digital Texts and Technologies_

Neal, M. R. (2010). Writing assessment and the revolution in digital texts and technologies. New York, NY: Teachers College Press. 168 pp.

by Peggy O’Neill, Loyola University Maryland

In this text, Neal offers a comprehensive look at the intersection of writing, assessment and digital technology that is appropriate for both writing teachers and researchers. He draws on a breadth of sources, clearly articulating complex ideas with minimal jargon. He also uses many examples from his own experiences as a college writing instructor, program administrator, assessment researcher, and parent. These anecdotes keep theoretical discussions grounded in the realities we all face whether in the classroom or the conference room. He provides practical advice for evaluating multimedia texts and frankly addresses many of the challenges these texts pose for instructors.

The text is a good source for teachers, scholars, and program administrators regardless of their expertise in writing assessment or digital technology. Both of these areas, after all, are here to stay whether we want them or not, and both are influencing what happens in our programs and classrooms. You can preview the Table of Contents and read the foreword by Janet Swenson and part of Neal’s introduction here.

The text is divided into two parts: In Part I, Neal explores writing assessment as a technology and then in Part II shifts to focus on writing assessment with technology. He aims to convince readers that we have a limited opportunity “to reframe our approaches to writing assessment so that they promote a rich and robust understanding of language and literacy” (p. 5).

Neal doesn’t waste time arguing about whether or not we should include multimedia texts in writing courses. As he says, multimedia writing (which may also go by other names such as hypertechs, new media, hypermedia, digital composing) is increasingly part of the world beyond the classroom as well as inside it. Instead, Neal examines how this shift influences writing instruction and assessment. In fact, Neal seems to see multimedia writing as a means of challenging the narrowly defined tasks currently associated with large scale testing, which continues to privilege timed, impromptu essays (often written by hand).

As a reader, I found Neal’s text well informed and easy to read. He starts by situating writing assessment as a technology, then reviews different critical stances toward technology in general and the implications of these positions for writing assessment. The discussion is wide ranging, drawing on scholars familiar to most compositionists such as Brian Huot, Cindy Selfe, Cheryl Ball, Anne Wysocki, and Christine Haas, as well as those coming from other traditions such as Langdon Winner, George Madaus, N. Katherine Hayles, and Marita Sturken and Douglas Thomas.

Neal weaves these sources together to identify the underlying assumptions and cultural narratives that characterize writing assessments as technologies. He articulates the tensions between the multimedia literacies of the 21st century and assessments rooted in the 20th: writing and writing courses are becoming more multimodal, while assessments of writing are becoming more mechanized (think of machine scoring). The disconnect, as Neal says in various ways throughout the text, is not lost on teachers who realize that students compose in a variety of formats outside of the classroom, who often must meet learning outcomes that include multimedia literacies, but who also must prepare students for exams that privilege traditional impromptu essays.

Neal sees several strategies for resolving—or at least lessening—the tension between emerging literacies and writing assessments. He advocates getting involved in decision-making about assessments, admitting that this is often difficult, whether at the local or national level. At the classroom or program level, he provides some practical information on how to develop appropriate evaluation criteria for responding to student projects.

He also looks to construct validity to “provide a framework that can help us at a most fundamental level in determining which digital assessment technologies to include in our writing classes, curriculum, and pedagogy” (p. 112). Neal’s argument here, though technical, is accessible to readers who are not assessment experts. He explains how construct validity allows us to determine the appropriateness, accuracy, and social consequences of multimedia writing and assessment technologies.

Neal also advocates using writing outcomes, such as the WPA Outcomes for First-Year Composition, as a framework for thinking through what kinds of hypertechs to include or exclude from a course or program. Outcomes, he explains, can “provide a starting point to talk about the ways in which the content of composition” is changing in terms of digital literacies and technologies (p. 120).


While Neal admits to being an advocate of new media, he avoids coming off as a zealot. As someone who is far from an early adopter, I appreciate his willingness to present a more balanced approach. For example, he admits that hyperattention—which is characterized by multiple streams of information, rapid switching from one task to another, a desire for high levels of stimulation, and a low tolerance for boredom, according to N. Katherine Hayles—is a controversial characteristic of the digital revolution. For many academics, as he notes, it is one that is quite troubling.

Overall, Neal’s text addresses important components of both writing assessment and digital technologies relevant to all of us involved in the teaching of writing.