White, E. M., Elliot, N., & Peckham, I. (2015). Very like a whale: The assessment of writing programs. Logan, UT: Utah State University Press.
By Peggy O’Neill, Loyola University Maryland
This volume offers readers a model for writing program assessment grounded in an overview of relevant theory and practice as well as case studies of two writing programs—Louisiana State University’s, where Peckham was the WPA, and New Jersey Institute of Technology’s, where Elliot served for many years. The text is organized into five main chapters—Trends, Lessons, Foundations, Measurement, and Design. It opens with an introduction and ends with a glossary of terms, references, and an index. The text also includes 17 tables and 13 figures, among them the model for a genre of writing program assessment that the authors put forth (see Figures 1.1 and 5.1).
The introduction, which is available on the publisher’s website, summarizes each of the chapters and explains the authors’ approach and the framework of the text. While it provides standard features such as a summary of each chapter, it also explains the title this way: “With AARP cards embedded firmly in their wallets, the three seniors, formally educated in literary studies, selected a passage from Hamlet for the title” (p. 2). This opening threw me off as a reader (although I had been wondering about the title) because of the way it positioned the authors, and it left me wondering why they situated themselves this way. A few paragraphs later, when articulating the audience for the book, they ask readers to “Imagine running into the three authors . . . at the annual meeting of the Conference on College Composition and Communication” (p. 4). They then present a dialog (“Let’s imagine just such a conversation” [p. 4]) to illustrate “the tone for [their] book” (p. 4), which they describe as “chatting with colleagues and students” (p. 4). At this point, I was not sure where this book was going or what it was doing, and I felt a bit exasperated by the tone of the opening. However, the introduction then proceeds into a more straightforward overview of their approach and the chapter summaries.
The chatty tone that opened the book pops up now and again throughout the text. As a reader I found myself rushing through passages that address the reader directly (e.g., “Because the LSU case study is the first of four complex studies, you may want to review it briefly now and then review it again after completing the book” [p. 39]) or give background information that seems unnecessary (e.g., the brief tangent about the philosopher who distinguished between nomothetic and idiographic knowledge and the reference to Henry Fielding’s comment about The History of Tom Jones to make a point about history [p. 73]). For the most part, however, the book is more focused, which I think is the authors’ goal.
No doubt, readers charged with conducting program review, which the authors define as “the process of documenting and reflecting on the impact of the program’s coordinated efforts” (p. 3), will benefit from the explanation of theory, methods, and practice that the authors offer. They seek, in their words, “to make clear and available recent and important concepts associated with assessment to those in the profession of rhetoric and composition/writing studies” (p. 3).
In keeping with this goal, the authors provide a range of strategies, examples, and best practices for conducting a program assessment, grounded in the scholarship of writing studies as well as educational measurement. The strategies and approaches aren’t necessarily presented step by step, so readers looking for a guide will need to read through the text and pull out what they want.
Although the case studies can help readers understand different questions and documentation methods, the level of detail sometimes seemed excessive. While I realize case studies require detail, I felt some details were unimportant or distracting, such as a brief history of WAC (p. 50). Likewise, referencing tagmemics (p. 103) and Toulmin (p. 104) in discussing how eportfolios would be evaluated seemed beyond the needs of most readers. Yet at other times I found myself wanting more explanation. In discussing the assessment of eportfolios for Writing about Science, Technology and Society, for instance, the explanation of the interreader reliability rates (pp. 56-57) and the conclusions drawn from that information seemed to need more development, especially for readers less experienced with assessment. It also wasn’t clear how the data presented on interreader reliability demonstrated that students are improving over time (p. 57). Although the authors explain their reasoning about student improvement, there seems to be a missing piece here. Yes, scores improved over the five years, but does that mean student writing improved? I am assuming different students were tested and other variables were in play (although admission test scores were consistent, they note). In other words, if the authors are assuming that many readers need basic information on WAC and WID, then I would expect that readers would also need a more complete and nuanced explanation of the technical data and analyses.
Lists of questions, such as the one found in Chapter 3, Lessons (p. 67), or the scoring sheet for a technical communication eportfolio in the same chapter (p. 56), can be of interest to readers looking for help in designing their own program assessments. Sharing examples of how eportfolios have been used is valuable for those of us trying to convince administrators to invest in the technology and faculty development time needed to implement them, yet I think this is a somewhat limited view of the potential of eportfolios.
In addition to some practical examples, readers will get a sense of educational and writing theories that inform the authors’ approach to writing program assessment. However, the authors want to focus on more than practice—that is, how to conduct a program assessment. They want to contribute to the theoretical concept of writing program assessment: the “main purpose of this book,” they explain, is “to advance the concept of writing program assessment as a unique genre in which constructs are modeled for students within unique institutional ecologies” (p. 7).
The book seems to achieve its first goal—providing readers with practical approaches and strategies—which is, I imagine, what most readers will be interested in. The second goal, to propose a genre of writing program assessment, is a bit more ambitious. While the model is unveiled in the first chapter, it is explained more fully in the last one, where each of its nine components is discussed in detail. Before delving into the components, the authors review fourteen key concepts that they have used throughout the book. These concepts address the field of rhetoric and composition/writing studies generally (e.g., “Epistemologically, advancement of our field is best made by both disciplinary and multidisciplinary inquiry” [p. 151]), measurement (e.g., “In matters of measurement, analyses are most useful if they adhere to important reporting standards, including construct definitions” [p. 152]), and writing program assessment (e.g., “Imagining a predictable future for the assessment of writing programs reveals a need for attending to . . . ” [p. 152]).
From here, the authors expound on their model, reminding readers that “acceptance of the model” (p. 153) is predicated on validity as Messick defined it in 1989: that validity is at the core of assessment and that it involves making a theoretical and empirical argument about the “adequacy and appropriateness of the inferences and actions based on test scores or other modes of assessment” (Messick, qtd. in White, Elliot, and Peckham, p. 154).
Their proposed model for the assessment of writing programs, presented as a flow chart that loops around with the results feeding back into the writing program, is then explained. Although some of the terminology and concepts in the framework are unfamiliar in the writing program assessment literature (e.g., standpoint), most of it will seem very familiar to those involved in assessment theory and practice (e.g., construct or documentation) or in writing program administration (e.g., communication). All in all, I don’t find the actual processes, strategies, and approaches for program assessment presented in this monograph all that new or different; instead, I think the book provides an overview of work in writing and writing program assessment that has been going on for the last several decades, pulling it together and presenting it in an attempt to link it to the broader fields of both writing studies and educational measurement.