Review of Bob Broad et al.’s Organic Writing Assessment: Dynamic Criteria Mapping in Action

By Donna Evans, Eastern Oregon University

Broad, B., Adler-Kassner, L., Alford, B., Detweiler, J., Estrem, H., Harrington, S.,…Weeden, S. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan, UT: Utah State University Press. 174 pgs.

In this text, co-authors from five different institutions have answered Broad’s call “to move beyond traditional assessment practices that over-simplify learning, teaching, and assessment, and to ‘embrace the life of things’” (p. 5). Relying primarily on Dynamic Criteria Mapping (DCM) methodology, first described by Broad in What We Really Value (2003), each project is designed to be rhetorically responsive to a unique institutional audience and investigational purpose. As a result, the processes, products, and analyses they report support the premise that what writing assessment experts increasingly value—locally grown, organic assessment—can be brought to fruition and yield bumper harvests of usable data.

Interestingly, the authors drafted their text in a dynamic form about a dynamic process. Broad supplies the first and last chapters, interchapters appear between most chapters, co-authors embed paragraph-long comments within the text of Chapters 2 through 6, and Broad is referenced throughout, creating the traces of an actant (a force for change) within a network structure. With disparate researchers and studies coming together to achieve the shared purpose of revealing DCM, then falling away to become unique entities when that purpose has been served, I see this text working as an actor network (Latour, 2007). Together, actors and actants exert strength of purpose in support of DCM. This form is apparent in the paperback and in the Adobe Digital Edition, but the layout of the Kindle edition obscures the elegance of the authors’ dialogues.

This text tells research stories useful to anyone interested in shaping assessment tools in local contexts, whether in classrooms, programs, departments, or across institutions. While DCM is aimed primarily at writing assessment, other uses are evident in the text’s inclusion of critical thinking and learning-across-the-curriculum assessment. Some early reviewers perceived DCM as just another approach to traditional rubrics, and Broad’s co-authors likewise express concern that their processes have slipped toward rubrics. But Broad dispels these concerns, reaffirming that local ownership accounts for variation in authentic DCM models. As a reader, I agree and have already begun planning assessment projects using the DCM process.

Broad reviews the theoretical foundation of DCM in Chapter 1. He writes, “Inspired by Guba and Lincoln’s Fourth Generation Evaluation (1989) and Glaser and Strauss’s grounded theory (1967), the DCM approach promotes inductive (democratic) and empirical (ethnographic) methods for generating accurate and useful accounts of what faculty and administrators value in their students’ work” (p. 5). Because I have used Guba and Lincoln’s methods to gather quantitative and qualitative data in my own research, DCM seems intuitive, a natural extension of proven procedures. Some reviewers of Broad’s earlier book saw DCM as too labor intensive, impracticable, and just another approach to traditional rubrics (p. 5). An important distinction of DCM, observed by Belanoff and Denny (2006), is “‘that [such a rubric] will be applicable only within the context in which it is created’ (135)” (pp. 5-6). However, the five DCM projects presented in Organic Writing Assessment show that the flexible, home-grown application of DCM makes good use of time and labor and produces usable criteria maps that occasionally include rubrics. These models show that DCM is doable and that, while its first purpose is to create home-grown assessment, the process is transferable across institutional and departmental boundaries. And while Broad’s co-authors express concern that their maps are too close to rubrics to be authentic DCM models, Broad assures them that they are “not only ‘legitimate’ practitioners of DCM but also pioneers of the next generation of praxis in large-scale writing assessment and faculty professional development” (p. 12).

In Chapter 2, Linda Adler-Kassner and Heidi Estrem discuss their DCM approach to a programmatic assessment of English 121, a required general education writing course at Eastern Michigan University. Students reported increased confidence with writing from beginning to end of the course, part of a two-year writing sequence focusing on place and genre. But administrators wanted to know what experts—not only students—said about students’ writing. In response, the authors employed a DCM protocol that evolved to include focus groups made up of students, faculty, staff, and administrators. Results of this DCM assessment process have influenced professional development and curriculum trajectories, generated interest among writing program administrators, and provided data to support the program. In my opinion, such robust generation of rich data makes DCM worthy of consideration.

Barry Alford of Mid Michigan Community College (MMCC)—the only two-year college represented in the text—explains in Chapter 3 that his colleagues view DCM as an acceptable institutional assessment method. This project is particularly interesting because it is aimed at opening up conversation among disciplinary faculty and uncovering information useful for teaching among faculty with heavy teaching loads and separated by disparate educational goals. Alford writes that differences among faculty in such environments “are so extreme that many institutions avoid even trying to assess common student outcomes” (p. 37). But by relying on already expressed values and existing student work, Alford and the MMCC faculty used DCM to uncover concepts hidden behind seemingly unrelated disciplinary content and student projects. Their process led to creation of a map with three criteria: 1) working from multiple perspectives; 2) application; and 3) communication and presentation skills (p. 42). Disciplinary faculty were then asked to identify where and how these valued criteria were measured in their courses.

In focusing upon student improvement rather than upon testing instruments, the MMCC dynamic criteria map moves the institution away from a compliance model, the dominant form of assessment at the community college level. I find this example intriguing because it exemplifies the potential of a bottom-up assessment method to inform institutional values, invite interdisciplinary conversation and collaboration, and, most importantly, benefit students. Also, by beginning with the institution’s expressed values and going beyond (or behind) them to identify concepts, Alford has shown that the work of developing a dynamic criteria map does not have to begin at ground zero.

In Chapter 4, Jane Detweiler and Maureen McBride of the University of Nevada, Reno (UNR) discuss DCM in vertical assessment of first-year writing and critical thinking. Anticipating faculty resistance to the heavy time commitment, the team hired student interns to facilitate assessment. Detweiler, McBride, and six interns received low survey participation, but the DCM process continued with focus groups of instructors, who were asked to create movie posters depicting their assessment concerns, followed by lists of values.

The UNR team developed a star-shaped assessment model with numerical values along its arms for scoring, yielding statistically significant data. The map was accompanied by a scoring guide (a matrix with teacher-generated descriptors) and a comment sheet (space for three entries related to issues noticed but not scored on the map, and three entries related to issues that had been scored) (p. 66). This DCM process, including qualitative and quantitative research, has influenced UNR’s teacher preparation and continued assessment, providing a means of “closing the loop.” I find UNR’s map to be an accessible, usable assessment tool. During portfolio assessment, dots assigned numerical values are connected across arms to create visual images that can be quickly interpreted and sorted. The map also provides space for comments on criteria that might be included in a later iteration of the assessment map.

In Chapter 5, Susanmarie Harrington and Scott Weeden of Indiana University–Purdue University Indianapolis (IUPUI) tell how changes in the writing program’s faculty, along with motivation to revise course goals and teaching approaches, had increased tensions in the department. To “address the failings in rubrics” that allow a single grade or adjective to represent complex ideas, Harrington and Weeden led writing faculty to seek detail through DCM (p. 78). Their process evolved to include discussion of sample portfolios, analysis and clustering of terms recorded during discussion, data presentation by way of document production, creation of a dynamic rubric, and application of that rubric in teaching and grading (p. 82). The resulting descriptors were catalogued under three headings—high (above passing), medium (passing), and low (below passing)—and called an “UnRubric,” a guide to assessing “variety in performance within common values” rather than a compliance instrument (p. 96). The authors point out that the language of the UnRubric promotes assessment based on qualities apparent in student writing rather than on degree of compliance with requirements. Harrington and Weeden report that the DCM process reduced discontent with the curriculum (p. 95). IUPUI’s successful collaborative discussion of the DCM process among faculty, plus similar successes within other institutions and programs, suggests to me as a WAC director that the process is worth trying for devising assessment instruments and building consensus.

In the final DCM project in this book (Chapter 6), Eric Stalions presents his work while a graduate student at Bowling Green State University. His purpose was to develop a qualitative and quantitative research approach to assessing placement decisions in the General Studies Writing program, and to “close the loop” between assessment and curriculum. Working with transcripts of four placement evaluator pairs and the coordinator’s program training and documents, Stalions developed a dynamic criteria map for each of three placement options. He explored evaluative criteria found in collected data that had not been described in existing program placement criteria, and observed that placement readers “expressed…a desire to be persuaded” in their assessment decisions (p. 136).

Stalions suggests that criteria used frequently by placement evaluators, but not included in assessment values, should be discussed and articulated to affect course assessment and curriculum. This is somewhat like returning to a played-out placer bed and panning for smaller flakes of gold left behind or ignored in the initial process. The newly discovered flakes are just as precious as those that came before. Similarly, criteria found in the DCM process are valuable, perhaps critical, to assessing the whole value of a piece of student writing and influencing teaching practices. The refinement of known and newly discovered values adds currency to institutional placement assessment and pedagogical aims.

Broad returns in Chapter 7 to summarize, to synthesize DCM processes, and to query what has been learned. He also respectfully objects to Brian Huot’s 2008 call at the Conference on College Composition and Communication for government regulation of writing assessment, asking instead whether organic assessment through DCM might change the face of higher education. While Broad agrees that government oversight of the testing industry is needed, he argues that home-grown assessment like DCM processes may be the answer. I mostly agree with Broad; however, I do not see DCM as a panacea that fits all institutional environments. Still, from the projects collected in Organic Writing Assessment, it is clear that DCM has only begun to seed itself across academia and that much can be expected from its widespread planting.

References

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Broad, B., Adler-Kassner, L., Alford, B., Detweiler, J., Estrem, H., Harrington, S.,…Weeden, S. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan, UT: Utah State University Press.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage Publications.

Latour, B. (2007). Reassembling the social: An introduction to actor-network-theory. New York, NY: Oxford University Press.

Part I: Review of Norbert Elliot and Les Perelman’s (Eds.) _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_

Elliot, N., & Perelman, L. (Eds.). (2012). Writing assessment in the 21st century: Essays in honor of Edward M. White. New York, NY: Hampton Press.

By Jessica Nastal, University of Wisconsin-Milwaukee

Writing Assessment in the 21st Century: Essays in Honor of Edward M. White is written as “a tribute in [Ed White’s] honor. In this testament to White’s ability to work across disciplinary boundaries, the collection is also a documentary, broadly conceived, of the states of writing assessment practice in the early 21st century” (p. 2). That emphasis on interdisciplinary collaboration to develop ethical assessment methods is evident throughout the introduction and the book as a whole. It is also, Norbert Elliot and Les Perelman argue, one of White’s significant contributions to the field.

Elliot and Perelman explain how Writing Assessment developed out of a celebration on the 25th anniversary of Ed White’s Teaching and Assessing Writing at the 2010 Conference on College Composition and Communication and the subsequent open-source Web site dedicated to collaboration among contributors “to document the state of practice of writing assessment in the early 21st century” (p. 12). Most generally, Writing Assessment in the 21st Century traces the history of writing assessment to provide readers with an understanding of the field and suggestions for where we might head in the future.

As a PhD candidate in Rhetoric and Composition with research areas in composition pedagogy, multilingual writing, and writing assessment, I find the book helpful in a number of ways. I appreciate seeing White’s call to encourage interdisciplinarity within writing assessment in action, as Writing Assessment’s 35 chapters include familiar names in writing assessment and composition studies (including this journal’s editors) – as well as directors of the National Writing Project, Educational Testing Service (ETS), writing-across-the-curriculum programs, federal governmental agencies, and scholars in technical communication and second language writing.

Because it is a hefty tome of over 500 pages, I will review Writing Assessment in the 21st Century in a series of posts. The first (this one) considers the first of Writing Assessment’s four sections; it will be followed by individual posts on the remaining sections and a final post discussing the book as a whole. Part I, “The Landscape of Contemporary Writing Assessment,” helps situate readers and demonstrates the breadth of writing assessment as it addresses how shifts within the field have come to influence our practices as educators and assessors of writing.

The result is refreshing: As I read the first section, I felt both comfortable (“Oh, I recognize this idea!”) and challenged (“Wait, there’s more to understand about the Harvard Entrance Exams than we’ve written in the past hundred-plus years?”). Sherry Seale Swain and Paul LeMahieu’s “Assessment in a Culture of Inquiry,” for example, discusses how the National Writing Project created the Analytic Writing Continuum as “an opportunity to explore the potential of assessment that is locally contextualized yet linked to a common national framework and standards of performance” by including K-16 teachers, researchers, and educational testing experts (p. 46). In this sense, the book affirms White’s position on writing assessment; Swain and LeMahieu document the positive results that occur when we collaborate across disciplinary boundaries.

Margaret Hundleby’s chapter, “The Questions of Assessment in Technical and Professional Communication,” raises many questions for me, someone who has held jobs but taken no coursework in technical and professional communication (TPC). Hundleby presents ideas of validity new to me as she describes dominant methods of TPC assessment in the post-World War II era, when scholars “[used] measurement to demonstrate both that the communication products could be relied on, and that the communicator was valid, or fully professional” (p. 119). What does it mean to be “fully professional”? How might assessments in composition studies change if we used that form of validity? How does it affect a piece of TPC writing?

Similarly, chapters by ETS researchers cause me to ask new questions, particularly in light of my first experience as an AP exam reader this summer. In “Rethinking K-12 Writing Assessment,” Paul Deane states, “We start by considering writing as a construct, viewed both socially and cognitively in terms of our competency model,” which initially raised some flags for me: how can we begin with assessing students’ competencies, particularly in a standardized exam (p. 90)? But the chapter encouraged me to be more open-minded about educational testing companies, too, as I realized Deane and ETS value writing as situated in local contexts, reflecting cultural practices (pp. 88, 97), and assessment as a method to reflect upon and improve teaching (p. 95). I still need to be convinced of the benefits of automated scoring, but Writing Assessment allows me to read ideas and research from a broader spectrum than I might ordinarily, and to realize we writing assessment folks share many core values.

Next: Part II: “Strategies in Contemporary Writing Assessment”