CFP: Responses to Common Core State Standards, Smarter Balanced Assessment Consortium, and Partnership for Assessment of Readiness for College and Careers

The Journal of Writing Assessment is interested in scholars’ responses to the writing assessments connected with the Common Core State Standards (www.corestandards.org) that are now in development. The two main consortia, the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have released various types of information about the assessments, including approach, use of technology, and sample items. While it is too early for full-fledged research on the specific writing assessments, theoretical discussions and critical reviews of material released from SBAC (http://www.smarterbalanced.org/) and PARCC (http://www.parcconline.org/) are welcome.

The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and non-educational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.

For more information, and for submission guidelines, visit JWA online at http://www.journalofwritingassessment.org/.

Review of Bob Broad et al.’s _Organic Writing Assessment: Dynamic Criteria Mapping in Action_

By Donna Evans, Eastern Oregon University

Broad, B., Adler-Kassner, L., Alford, B., Detweiler, J., Estrem, H., Harrington, S.,…Weeden, S. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan, UT: Utah State University Press. 174 pgs.

In this text, co-authors from five different institutions have answered Broad’s call “to move beyond traditional assessment practices that over-simplify learning, teaching, and assessment, and to ‘embrace the life of things’” (p. 5). Relying primarily on Dynamic Criteria Mapping (DCM) methodology, first described by Broad in What We Really Value (2003), each project is designed to be rhetorically responsive to a unique institutional audience and investigational purpose. As a result, the processes, products, and analyses they report support the premise that what writing assessment experts increasingly value—locally grown, organic assessment—can be brought to fruition and yield bumper harvests of usable data.

Interestingly, the authors drafted their text in a dynamic form about a dynamic process. Broad supplies the first and last chapters, interchapters appear between most chapters, co-authors embed paragraph-long comments within the text of Chapters 2 through 6, and Broad is referenced throughout, creating actant (a force for change) traces of a network structure. With disparate researchers and studies coming together to achieve the shared purpose of revealing DCM, then falling away to become unique entities once that purpose has been served, I see this text working as an actor network (Latour, 2007). Together, actors and actants exert strength of purpose in support of DCM. This form is apparent in the paperback and in the Adobe Digital Edition, but the layout of the Kindle edition obscures the elegance of the authors’ dialogues.

This text tells research stories useful to anyone interested in shaping assessment tools in local contexts, whether in classrooms, programs, departments, or across institutions. While DCM is aimed primarily at writing assessment, other uses are evident in the text’s inclusion of assessments of critical thinking and of learning across the curriculum. Some early reviewers perceived DCM as just another approach to traditional rubrics, and Broad’s co-authors also express concern that their processes have slipped toward rubrics. But Broad dispels these concerns, reaffirming that local ownership accounts for variation in authentic DCM models. As a reader, I agree and have already begun planning assessment projects using the DCM process.

Broad reviews the theoretical foundation of DCM in Chapter 1. He writes, “Inspired by Guba and Lincoln’s Fourth Generation Evaluation (1989) and Glaser and Strauss’s grounded theory (1967), the DCM approach promotes inductive (democratic) and empirical (ethnographic) methods for generating accurate and useful accounts of what faculty and administrators value in their students’ work” (p. 5). Because I have used Guba and Lincoln’s methods to gather quantitative and qualitative data in my own research, DCM seems intuitive, a natural extension of proven procedures. Some reviewers of Broad’s earlier book saw DCM as too labor intensive, impracticable, and just another approach to traditional rubrics (p. 5). An important distinction of DCM, observed by Belanoff and Denny (2006), is “‘that [such a rubric] will be applicable only within the context in which it is created’ (135)” (pp. 5-6). However, the five DCM projects presented in Organic Writing Assessment show that the flexible, home-grown application of DCM makes good use of time and labor, and produces usable criteria maps that occasionally include rubrics. These models show that DCM is doable, and that, while the first purpose is to create home-grown assessment, the process is transferable across institutional and departmental boundaries. And while Broad’s co-authors express concern that their maps are too close to rubrics to be authentic DCM models, Broad assures them that they are “not only ‘legitimate’ practitioners of DCM but also pioneers of the next generation of praxis in large-scale writing assessment and faculty professional development” (p. 12). You can preview this introduction here.

In Chapter 2, Linda Adler-Kassner and Heidi Estrem discuss their DCM approach to a programmatic assessment of English 121, a required general education writing course at Eastern Michigan University. Students reported increased confidence with writing from the beginning to the end of the course, which is part of a two-year writing sequence focusing on place and genre. But administrators wanted to know what experts—not only students—said about students’ writing. In response, the authors employed a DCM protocol that evolved to include focus groups made up of students, faculty, staff, and administrators. Results of this DCM assessment process have influenced professional development and curriculum trajectories, generated interest among writing program administrators, and provided data to support the program. In my opinion, such robust generation of rich data makes DCM worthy of consideration.

Barry Alford of Mid Michigan Community College (MMCC)—the only two-year college represented in the text—explains in Chapter 3 that his colleagues view DCM as an acceptable institutional assessment method. This project is particularly interesting because it aims to open up conversation among disciplinary faculty and to uncover information useful for teaching among faculty who carry heavy teaching loads and are separated by disparate educational goals. Alford writes that differences among faculty in such environments “are so extreme that many institutions avoid even trying to assess common student outcomes” (p. 37). But by relying on already expressed values and existing student work, Alford and the MMCC faculty used DCM to uncover concepts hidden behind seemingly unrelated disciplinary content and student projects. Their process led to the creation of a map with three criteria: 1) working from multiple perspectives; 2) application; and 3) communication and presentation skills (p. 42). Disciplinary faculty were then asked to identify where and how these valued criteria were measured in their courses.

In focusing upon student improvement rather than upon testing instruments, the MMCC dynamic criteria map moves the institution away from a compliance model, the dominant form of assessment at the community college level. I find this example intriguing because it exemplifies the potential of a bottom-up assessment method to inform institutional values, invite interdisciplinary conversation and collaboration, and, most importantly, benefit students. Also, by beginning with the institution’s expressed values and going beyond (or behind) them to identify concepts, Alford has shown that the work of developing a dynamic criteria map does not have to begin at ground zero.

In Chapter 4, Jane Detweiler and Maureen McBride of the University of Nevada, Reno (UNR) discuss DCM in a vertical assessment of first-year writing and critical thinking. Anticipating faculty resistance to a heavy time commitment, the program brought on student interns to facilitate the assessment. Detweiler, McBride, and six interns received low survey participation, but the DCM process continued with focus groups composed of instructors, who were asked to create movie posters depicting their assessment concerns and then lists of values.

The UNR team developed a star-shaped assessment model with numerical values along its arms for scoring, yielding statistically significant data. The map was accompanied by a scoring guide (a matrix with teacher-generated descriptors) and a comment sheet (space for three entries related to issues noticed but not scored on the map, and three entries related to issues that had been scored) (p. 66). This DCM process, including qualitative and quantitative research, has influenced UNR’s teacher preparation and continued assessment, providing a means of “closing the loop.” I find UNR’s map to be an accessible, usable assessment tool. During portfolio assessment, dots assigned numerical values are connected across arms to create visual images that can be quickly interpreted and sorted. The map also provides space for comments on criteria that might be included in a later iteration of the assessment map.

In Chapter 5, Susanmarie Harrington and Scott Weeden at Indiana University Purdue University Indianapolis (IUPUI) tell how changes in the writing program’s faculty, together with the motivation to revise course goals and teaching approaches, had increased tensions in the department. In “address[ing] the failings in rubrics” that allow a single grade or adjective to represent complex ideas, Harrington and Weeden led writing faculty to seek detail through DCM (p. 78). Their process evolved to include discussion of sample portfolios, analysis and clustering of terms recorded during discussion, data presentation by way of document production, creation of a dynamic rubric, and application of that rubric in teaching and grading (p. 82). The resulting descriptors were catalogued under three headings—high (above passing), medium (passing), and low (below passing)—and called an “UnRubric,” a guide to assessing “variety in performance within common values” rather than a compliance instrument (p. 96). The authors point out that the language of the UnRubric promotes assessment based on qualities apparent in student writing rather than on degree of compliance with requirements. Harrington and Weeden report that the DCM process reduced discontent with the curriculum (p. 95). IUPUI’s successful collaborative discussion of the DCM process among faculty, along with similar successes at other institutions and programs, suggests to me as a WAC director that the process is worth trying for devising assessment instruments and building consensus.

In the final DCM project in this book (Chapter 6), Eric Stalions presents work he conducted while a graduate student at Bowling Green State University. His purpose was to develop a qualitative and quantitative research approach to assessing placement decisions in the General Studies Writing program, and to “close the loop” between assessment and curriculum. Working with transcripts from four pairs of placement evaluators and with the coordinator’s program training and documents, Stalions developed a dynamic criteria map for each of three placement options. He explored evaluative criteria found in the collected data that had not been described in existing program placement criteria, and observed that placement readers “expressed…a desire to be persuaded” in their assessment decisions (p. 136).

Stalions suggests that criteria used frequently by placement evaluators, but not included in assessment values, should be discussed and articulated to affect course assessment and curriculum. This is somewhat like returning to a played-out placer bed and panning for smaller flakes of gold left behind or ignored in the initial process. The newly discovered flakes are just as precious as those that came before. Similarly, criteria found in the DCM process are valuable, perhaps critical, to assessing the whole value of a piece of student writing and influencing teaching practices. The refinement of known and newly discovered values adds currency to institutional placement assessment and pedagogical aims.

Broad returns in Chapter 7 to summarize, to synthesize the DCM processes, and to ask what has been learned. He also respectfully objects to Brian Huot’s 2008 call at the Conference on College Composition and Communication for government regulation of writing assessment, asking instead whether organic assessment through DCM might change the face of higher education. While Broad agrees that government oversight of the testing industry is needed, he argues that home-grown assessments like DCM may be the answer. I mostly agree with Broad, although I do not see DCM as a panacea that fits all institutional environments. Still, from the projects collected in Organic Writing Assessment, it is clear that DCM has only begun to seed itself across academia and that much can be expected from its widespread planting.

References

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Broad, B., Adler-Kassner, L., Alford, B., Detweiler, J., Estrem, H., Harrington, S.,…Weeden, S. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan, UT: Utah State University Press.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.

Latour, B. (2007). Reassembling the social: An introduction to actor-network-theory. New York, NY: Oxford University Press.

Part I: Review of Norbert Elliot’s and Les Perelman’s (Eds.) _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_

Elliot, N., & Perelman, L. (Eds.). (2012). Writing assessment in the 21st century: Essays in honor of Edward M. White. New York, NY: Hampton Press.

By Jessica Nastal, University of Wisconsin-Milwaukee

Writing Assessment in the 21st Century: Essays in Honor of Edward M. White is written as “a tribute in [Ed White’s] honor. In this testament to White’s ability to work across disciplinary boundaries, the collection is also a documentary, broadly conceived, of the states of writing assessment practice in the early 21st century” (p. 2). That emphasis on interdisciplinary collaboration to develop ethical assessment methods is evident throughout the introduction and the book as a whole. It is also, Norbert Elliot and Les Perelman argue, one of White’s significant contributions to the field.

Elliot and Perelman explain how Writing Assessment developed out of a celebration of the 25th anniversary of Ed White’s Teaching and Assessing Writing at the 2010 Conference on College Composition and Communication and the subsequent open-source Web site dedicated to collaboration among contributors “to document the state of practice of writing assessment in the early 21st century” (p. 12). Most generally, Writing Assessment in the 21st Century traces the history of writing assessment to provide readers with an understanding of the field and suggestions for where we might head in the future.

As a PhD candidate in Rhetoric and Composition with research areas in composition pedagogy, multilingual writing, and writing assessment, I find the book helpful in a number of ways. I appreciate seeing in action White’s call to encourage interdisciplinarity within writing assessment, as Writing Assessment’s 35 chapters include familiar names in writing assessment and composition studies (including this journal’s editors), as well as directors of the National Writing Project, Educational Testing Service (ETS), writing-across-the-curriculum programs, and federal governmental agencies, and scholars in technical communication and second language writing.

Because it is a hefty tome – over 500 pages – I will review Writing Assessment in the 21st Century in a series of posts. The first (this one) considers the first of Writing Assessment’s four sections; it will be followed by individual posts on the remaining sections and a final post discussing the book as a whole. Part I, “The Landscape of Contemporary Writing Assessment,” helps situate readers and demonstrates the breadth of writing assessment as it addresses how shifts within the field have come to influence our practices as educators and assessors of writing.

The result is refreshing: As I read the first section, I felt both comfortable (“Oh, I recognize this idea!”) and challenged (“Wait, there’s more to understand about the Harvard Entrance Exams than we’ve written in the past hundred-plus years?”). Sherry Seale Swain and Paul LeMahieu’s “Assessment in a Culture of Inquiry,” for example, discusses how the National Writing Project created the Analytic Writing Continuum as “an opportunity to explore the potential of assessment that is locally contextualized yet linked to a common national framework and standards of performance” by including K-16 teachers, researchers, and educational testing experts (p. 46). In this sense, the book affirms White’s position on writing assessment; Swain and LeMahieu document the positive results that occur when we collaborate across disciplinary boundaries.

Margaret Hundleby’s chapter, “The Questions of Assessment in Technical and Professional Communication,” raises many questions for me, someone who has held jobs in technical and professional communication (TPC) but taken no coursework in it. Hundleby presents ideas of validity that are new to me as she describes dominant methods of TPC assessment in the post-World War II era, when scholars “[used] measurement to demonstrate both that the communication products could be relied on, and that the communicator was valid, or fully professional” (p. 119). What does it mean to be “fully professional”? How might assessments in composition studies change if we used that form of validity? How does it affect a piece of TPC writing?

Similarly, chapters by ETS researchers prompt new questions for me, particularly in light of my first experience as an AP exam reader this summer. In “Rethinking K-12 Writing Assessment,” Paul Deane states, “We start by considering writing as a construct, viewed both socially and cognitively in terms of our competency model,” which initially raised some flags for me: how can we begin with assessing students’ competencies, particularly in a standardized exam (p. 90)? But the chapter encouraged me to be more open-minded about educational testing companies, too, as I realized that Deane and ETS value writing as situated in local contexts, reflecting cultural practices (pp. 88, 97), and assessment as a method to reflect upon and improve teaching (p. 95). I still need to be convinced of the benefits of automated scoring, but Writing Assessment allows me to read ideas and research from a broader spectrum than I ordinarily might, and to realize that we writing assessment folks share many core values.

Next: Part II: “Strategies in Contemporary Writing Assessment”

JWA at IWAC conference in Savannah

The Journal of Writing Assessment would like to give special thanks to some of the people who helped promote JWA at the recent International Writing Across the Curriculum conference in Savannah, Georgia.

First, thanks to Nick Carbone of Bedford/St. Martin’s, who gave us space at his table to distribute our fliers. The space was centrally located in a high-traffic area of the conference. Thank you so much, Nick, for your support!

Second, we want to thank Twenty Six LLC for featuring JWA on their banner as part of their portfolio. Twenty Six LLC designed and hosts the JWA website, and we really appreciate their excellent work!

Thanks again! The IWAC conference was excellent, and many sessions focused on issues of writing assessment. We welcome submissions from this conference to JWA!

Diane Kelly-Riley and Peggy O’Neill, Editors

Susan Callahan’s review of George Hillocks’s _The Testing Trap: How State Writing Assessments Control Learning_

Here is another review from the Journal of Writing Assessment’s archives:

Please read Susan Callahan’s review, “Testing the tests,” from Volume 2, Number 1 (Spring 2005) of the Journal of Writing Assessment.

Callahan reviews George Hillocks’s The testing trap: How state writing assessments control learning. New York: Teachers College Press, April 2002. 240 pages. Paperback: $23.95, ISBN 0807742295; cloth: $54.00, ISBN 0807742309.

Anthony Edgington’s review of Lad Tobin’s _Reading Student Writing: Confessions, Meditations, and Rants_

Here is another review from the JWA archives:

Please read Anthony Edgington’s “Understanding Student Writing–Understanding Teachers Reading: Contextualizing Reading and Response” from Volume 2, Number 1 of the Journal of Writing Assessment.

Edgington reviews Lad Tobin’s Reading Student Writing: Confessions, Meditations, and Rants. Portsmouth, NH: Boynton/Cook, 2004. 416 pp. Paper: $34.50, ISBN 1-57273-394-2.

Terry Underwood’s review of Liz Hamp-Lyons and William Condon’s _Assessing the Portfolio_

The Journal of Writing Assessment has many reviews in its archives.

Please read Terry Underwood’s “Portfolios across the centuries: A review of Assessing the Portfolio” from Volume 1, Number 2 of JWA.

You can find out more information about this text here: Hamp-Lyons, L., & Condon, W. (2000). Assessing the portfolio: Principles for practice, theory and research. Cresskill, NJ: Hampton Press.

Review of Sandra Murphy and Terry Underwood’s _Portfolio Practices: Lessons from Schools, Districts and States_

Murphy, S., & Underwood, T. (2000). Portfolio practices: Lessons from schools, districts and states. Norwood, MA: Christopher-Gordon.

As we start the JWA Reading List, we want to highlight some of the past reviews of noteworthy books on writing assessment that are available in the archives of the Journal of Writing Assessment. All of these reviews are available as free downloads.

To begin, we’d like to draw your attention to Susan Callahan’s 2003 review of Sandra Murphy and Terry Underwood’s Portfolio Practices: Lessons from Schools, Districts and States, published in 2000 by Christopher-Gordon.

Diane Kelly-Riley and Peggy O’Neill, Editors
Journal of Writing Assessment

Review of Michael Neal’s _Writing Assessment and the Revolution in Digital Texts and Technologies_

Neal, M. R. (2010). Writing assessment and the revolution in digital texts and technologies. New York: Teachers College Press. 168 pgs.

by Peggy O’Neill, Loyola University Maryland

In this text, Neal offers a comprehensive look at the intersection of writing, assessment and digital technology that is appropriate for both writing teachers and researchers. He draws on a breadth of sources, clearly articulating complex ideas with minimal jargon. He also uses many examples from his own experiences as a college writing instructor, program administrator, assessment researcher, and parent. These anecdotes keep theoretical discussions grounded in the realities we all face whether in the classroom or the conference room. He provides practical advice for evaluating multimedia texts and frankly addresses many of the challenges these texts pose for instructors.

The text is a good source for teachers, scholars, and program administrators regardless of their expertise in writing assessment or digital technology. Both of these areas, after all, are here to stay whether we want them or not, and both are influencing what happens in our programs and classrooms. You can preview the Table of Contents and read the foreword by Janet Swenson and part of Neal’s introduction here.

The text is divided into two parts: In Part I, Neal explores writing assessment as a technology and then in Part II shifts to focus on writing assessment with technology. He aims to convince readers that we have a limited opportunity “to reframe our approaches to writing assessment so that they promote a rich and robust understanding of language and literacy” (p. 5).

Neal doesn’t waste time arguing about whether or not we should include multimedia texts in writing courses. As he says, multimedia writing (which may also go by other names such as hypertechs, new media, hypermedia, digital composing) is increasingly part of the world beyond the classroom as well as inside it. Instead, Neal examines how this shift influences writing instruction and assessment. In fact, Neal seems to see multimedia writing as a means of challenging the narrowly defined tasks currently associated with large scale testing, which continues to privilege timed, impromptu essays (often written by hand).

As a reader, I found Neal’s text well informed and easy to read. He starts by situating writing assessment as a technology, then reviews different critical stances toward technology in general and the implications of these positions for writing assessment. The discussion is wide ranging, drawing on scholars familiar to most compositionists, such as Brian Huot, Cindy Selfe, Cheryl Ball, Anne Wysocki, and Christina Haas, as well as those coming from other traditions, such as Langdon Winner, George Madaus, N. Katherine Hayles, and Marita Sturken and Douglas Thomas.

Neal weaves these sources together to identify the underlying assumptions and cultural narratives that characterize writing assessments as technologies. He articulates the tensions between the multimedia literacies of the 21st century and assessments rooted in the 20th: writing and writing courses are becoming more multimodal, while assessments of writing are becoming more mechanized (think of machine scoring). The disconnect, as Neal says in various ways throughout the text, is not lost on teachers who realize that students compose in a variety of formats outside of the classroom and who often have to meet learning outcomes that include multimedia literacies, but who also must prepare students for exams that privilege traditional impromptu essays.

Neal sees several strategies for resolving—or at least lessening—the tension between emerging literacies and writing assessments. He advocates getting involved in decision-making about assessments, admitting that this is often difficult, whether at the local or national level. At the classroom or program level, he provides some practical information on how to develop appropriate evaluation criteria for responding to student projects.

He also looks to construct validity to “provide a framework that can help us at a most fundamental level in determining which digital assessment technologies to include in our writing classes, curriculum, and pedagogy” (p. 112).  Neal’s argument here, though technical, is accessible to readers who are not assessment experts. He explains how construct validity allows us to determine the appropriateness, accuracy and social consequences of multimedia writing and assessment technologies.

Neal also advocates using writing outcomes, such as the WPA Outcomes for First-Year Composition, as a framework for thinking through what kinds of hypertechs to include or exclude from a course or program. Outcomes, he explains, can “provide a starting point to talk about the ways in which the content of composition” is changing in terms of digital literacies and technologies (p. 120).

While Neal admits to being an advocate of new media, he avoids coming off as a zealot. As someone who is far from an early adopter, I appreciate his willingness to present a more balanced approach. For example, he admits that hyperattention—which is characterized by multiple streams of information, rapid switching from one task to another, a desire for high levels of stimulation, and a low tolerance for boredom, according to N. Katherine Hayles—is a controversial characteristic of the digital revolution. For many academics, as he notes, it is one that is quite troubling.

Overall, Neal’s text addresses important components of both writing assessment and digital technologies relevant to all of us involved in the teaching of writing.

Welcome

Welcome to the Journal of Writing Assessment’s Reading List! At this site, we’ll provide quick reviews of recent writing assessment publications. These reviews will cover the main points of each publication, provide an overview of the methodology, identify controversies, and then discuss the implications for practitioners of writing assessment.