Call for Papers: Special Issue of JWA on the Common Core State Standards Assessments

Call for Papers
Special Issue of Journal of Writing Assessment
The Common Core State Standards Assessments
The Journal of Writing Assessment is interested in scholars’ and teachers’ responses to the writing assessments connected with the implementation of the Common Core State Standards. The two main consortia, the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have released various types of information about the writing assessments, including approach, use of technology, and sample items.
The assessments were piloted in 2013-14, and are being implemented in most participating states during the 2014-15 academic year. Both SBAC and PARCC are approving and releasing achievement levels based on student performance on the pilot assessments. The SBAC (http://www.smarterbalanced.org/) and PARCC (http://www.parcconline.org/) assessment instruments are reshaping the assessment of—and potentially the teaching and learning of—writing in elementary and secondary education in many states. These assessments are defining and measuring the writing skills students need for “college-and-career readiness.” This enterprise is one of the largest-scale writing assessment projects ever undertaken in the United States. Researchers need to evaluate not only the validity and reliability of these assessment instruments, but also their impacts on teaching and learning.
The Journal of Writing Assessment seeks articles that examine:
  • Theoretical stances behind the Common Core State Standards assessments,
  • Development processes for the CCSS assessment instruments,
  • Implementation of the assessments, and
  • Impacts of these assessments on writing curricula and instruction at the classroom, district, and/or state levels.

We are interested in manuscripts that explore the CCSS assessments from a variety of viewpoints including, but not limited to, empirical, historical, theoretical, qualitative, experiential, and quantitative perspectives.
For inclusion in JWA 8.1, proposals (200-400 words) are due by Feb. 27, 2015, via the JWA Submission page. Full drafts of articles are due by May 31, 2015. As accepted manuscripts are developed, please follow JWA’s guidelines for submission. Queries may be addressed to the JWA editors, Diane Kelly-Riley and Carl Whithaus, at journalofwritingassessment@gmail.com.
The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and non-educational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.
For more information, visit JWA online at http://www.journalofwritingassessment.org/.

Source: jwa


Review of _Digital Writing Assessment & Evaluation_ by Heidi A. McKee and Danielle Nicole DeVoss, Editors

Review of McKee, H. A., & DeVoss, D. N. (Eds.). (2013). Digital writing assessment & evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from http://ccdigitalpress.org/dwae.

ISBN: 978-0-87421-949-4

By Leslie Valley, Eastern Kentucky University

Heidi McKee and Danielle DeVoss’s 2013 digital book, Digital Writing Assessment and Evaluation (DWAE), offers theoretical and practical approaches to understanding the assessment challenges posed by digital writing. An edited collection, DWAE features a foreword by Andrea Lunsford, a preface by the editors, fourteen chapters by thirty-eight authors, and an afterword by Edward White. While the book focuses primarily on digital writing assessment in post-secondary composition education, its attention to ethics, class structure, multimodal texts, and programmatic concerns highlights key discussions in digital writing that are helpful for K-12 teachers and Writing Across the Curriculum administrators as well.

McKee and DeVoss have organized the chapters of DWAE in a practical way, first addressing the issues of fairness and privacy before moving on to discussions of classroom and programmatic implementation. In the first section, “Equity and Assessment,” Mya Poe and Angela Crow assert the importance of ethical decision-making when gathering and storing data and implementing change based on that data. Having established ethical considerations as the foundation, DWAE then delves into the more specific concerns of grading rubrics, student engagement and responsibility, e-portfolios, and program assessment.

Those looking to understand the connection between digital writing and course learning outcomes also have much to gain from DWAE. In the second and third sections, “Classroom Evaluation and Assessment” and “Multimodal Evaluation and Assessment,” the authors provide specific examples of assignments, students’ digital texts, and approaches to assessment. While they offer different frameworks for assessment, each author emphasizes the connection between assessment and assignment design, the importance of language and early discussions with students, and the necessity of contextualizing assessment. In Chapter 4, for example, Colleen Reilly and Anthony Atkins demonstrate that assessment language can be designed in such a way that it is not only understandable to students but also stimulates their motivation and engagement in the production of digital compositions. Reilly and Atkins point to a primary trait scoring approach rather than a holistic approach as a way to account for both process and product in the classroom.

In addition to classroom and assignment-specific frameworks, DWAE also offers methodologies for program assessment. In the final section, “Program Revisioning and Program Assessment,” the four chapters discuss pedagogical, institutional, and financial motivations for revising program assessment. Again, authors make the connection between assessment and pedagogy, demonstrating the benefits of digital platforms for immediate programmatic feedback on assignments, instruction, and grading rubrics that prompt immediate programmatic revision. They explore the potential of these digital platforms for rethinking program design and professional development for instructors. Specifically, Beth Brunk-Chavez and Judith Fourzan-Rice illustrate their experience with MinerWriter, a digital distribution system that has allowed University of Texas at El Paso to standardize assessment. This approach, they contend, has allowed them to bridge the disconnect between assessment and instruction by identifying students’ struggles and responding with assignment revision and professional development at the programmatic level.

McKee, DeVoss, and the authors take advantage of the digital format, linking to additional information and resources, embedding videos and screenshots, and creating non-linear chapters (see, specifically, Chapter 6 by Susan Delagrange, Ben McCorkle, and Catherine Braun). These digital affordances allow DWAE to demonstrate the full rhetorical context in which these assessment models exist, providing readers with a fuller understanding of the connections between assessment, pedagogy, and digital technologies. The advantages of the digital format are especially evident in Meredith Zoeteway, Michelle Simmons, and Jeffrey Grabill’s chapter on assessment design and civic engagement. By including hyperlinks, screenshots, videos, and diagrams, they provide a complete overview of the values, goals, materials, assignments, discussions, and assessments included in a digital writing course focused on civic engagement.

In their preface, McKee and DeVoss acknowledge two topics DWAE does not address: digital writing and students with disabilities, and automated essay scoring (AES) (although Edward White’s afterword does foreground the need for more research on AES). Despite these absences, DWAE is a comprehensive look at digital writing assessment in a variety of contexts. Rather than offering one overarching theory of assessment, the text establishes the importance of assessment in context. The variety of contexts and proposed methodologies prompt both teachers and WPAs to consider digital writing assessment in light of their own ideological and pedagogical values and institutional settings.



Exciting news from the _Journal of Writing Assessment_

As you know, the Journal of Writing Assessment was founded in 2003 by Kathleen Blake Yancey and Brian Huot as an independent journal that publishes a wide range of writing assessment scholarship from a wide range of scholars and teachers. JWA was originally a print, subscription-based journal published by Hampton Press. In 2011, Peggy O’Neill and Diane Kelly-Riley became editors of JWA and moved the journal to a free, online, open-access publication. Hampton Press generously donated all of the print-based issues of JWA, and they are available for free on the site at http://journalofwritingassessment.org.

Since our move online, JWA has had a great deal of traffic: in the last year, more than 25,000 visits and more than 251,000 hits have been recorded on the JWA site. Additionally, in the past year, scholarship published by JWA has received significant attention in the Chronicle of Higher Education and Inside Higher Ed. We are indexed in ERIC, MLA, and CompPile.org.

We’d like to update you on some exciting news at the Journal of Writing Assessment:

Beginning January 2015, Carl Whithaus of the University of California, Davis, will replace Peggy O’Neill as co-editor of JWA. Carl has an extensive and impressive record as a scholar and practitioner of writing assessment.

Carl’s appointment as co-editor will continue to position JWA as a journal that makes peer-reviewed scholarship about writing assessment accessible to a wide audience. His expertise in automated scoring of writing and his connections with the National Writing Project will greatly benefit JWA as the move to mandated assessments continues, both in the K-12 setting and in higher education. We’re committed to publishing a wide range of scholarship that can inform the quickly changing landscape of writing assessment in educational settings.

Additionally, our associate editor, Jessica Nastal-Dema, will continue in her role with JWA as she transitions to a faculty position at Georgia Southern University.

Likewise, we continue to engage graduate students who are up-and-coming scholars of writing assessment in our work. Tialitha Macklin, a PhD candidate at Washington State University, continues in her role as Assistant Editor, and David Bedsole and Bruce Bowles, PhD students at Florida State University, will co-edit the JWA Reading List.

We are pleased to announce the redesign of the Journal of Writing Assessment site. We have refreshed the look and added a search function so that the entire site (including PDFs) is searchable. This redesign makes the excellent scholarship published by JWA much more accessible to a wider audience. JWA is hosted and designed by Twenty Six Design.

Finally, we want to acknowledge the financial support of the University of Idaho’s College of Letters, Arts and Sciences and Department of English.  Their generous support enables JWA to remain an independent journal.

Diane Kelly-Riley, University of Idaho, and Peggy O’Neill, Loyola University Maryland, Editors



Part I: Review of Handbook of Automated Essay Evaluation: Current Applications and New Directions. Eds. Mark D. Shermis and Jill Burstein

Shermis, M. D., & Burstein, J. (Eds.). (2013). Handbook of automated essay evaluation: Current applications and new directions. New York, NY: Routledge.

By Lori Beth De Hertogh, Washington State University

The Handbook of Automated Essay Evaluation: Current Applications and New Directions, edited by Mark D. Shermis, University of Akron, and Jill Burstein, Educational Testing Service, features twenty chapters, each of which deals with a different aspect of automated essay evaluation (AEE). The overall purpose of the collection is to help professionals (i.e., educators, program administrators, researchers, testing specialists) working in a range of assessment contexts in K-12 and higher education better understand the capabilities of AEE. It also strives to demystify machine scoring and to highlight advances in several scoring platforms.

The collection is loosely organized into three parts. Authors of the first three chapters discuss automated essay evaluation in classroom contexts. The next section examines the workflow of various scoring engines. In the final section, authors highlight advances in automated essay evaluation. My two-part review generally follows this organizational scheme, except that I begin by examining the workflow of several scoring systems as well as platform options. I then review how several chapters describe potential uses of AEE in classroom contexts and recent developments in machine scoring.

The Handbook of Automated Essay Evaluation devotes considerable energy to explaining how scoring engines work. Matthew Schultz, director of psychometric services for Vantage Learning, describes in Chapter Six how the IntelliMetric™ engine analyzes and scores a text:

The IntelliMetric system must be ‘trained’ with a set of previously scored responses drawn from expert raters or scorers. These papers are used as a basis for the system to ‘learn’ the rubric and infer the pooled judgments of the human scorers. The IntelliMetric system internalizes the characteristics or features of the responses associated with each score point and applies this intelligence to score essays with unknown scores. (p. 89)

While the methods platforms like IntelliMetric use to determine a score differ slightly, they all employ a multistage process consisting of four basic steps:

  • receiving the text,
  • using natural language processing to parse text components such as structure, content, and style,
  • analyzing the text against a database of previously human- and machine-scored texts, and
  • producing a score based on how the text is similar or dissimilar to previously rated texts.
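To make the four steps above concrete, here is a toy sketch in Python. It is purely illustrative: real engines such as IntelliMetric or e-rater use far richer linguistic features and proprietary statistical models, and the features, sample essays, scores, and nearest-neighbor scoring rule below are invented for demonstration only.

```python
import math

def extract_features(text):
    """Step 2: parse crude structure, content, and style features."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "vocab_richness": len({w.lower() for w in words}) / max(len(words), 1),
    }

def score_essay(text, training_set, k=3):
    """Steps 1, 3, and 4: receive the text, compare it against
    previously human-scored essays, and average the k nearest scores."""
    feats = extract_features(text)

    def distance(example):
        # Euclidean distance in the toy feature space.
        return math.sqrt(sum((feats[f] - example["features"][f]) ** 2
                             for f in feats))

    nearest = sorted(training_set, key=distance)[:k]
    return sum(ex["score"] for ex in nearest) / len(nearest)

# "Training" on a handful of human-scored essays, then scoring a new one.
rated = [
    ("Dogs run. Cats sleep.", 2),
    ("The essay argues that writing matters. It gives reasons. It concludes.", 4),
    ("Assessment shapes instruction because teachers respond to what is "
     "measured, and students learn what is rewarded.", 6),
]
training = [{"features": extract_features(t), "score": s} for t, s in rated]
new_score = score_essay(
    "Writing is learned by writing. Feedback helps writers grow.",
    training, k=3)
```

With only three training essays and k=3, the sketch simply averages all three human scores; the point is the shape of the pipeline, not the quality of the score.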

In Chapter Eight, Elijah Mayfield and Carolyn Penstein Rosé, language and technology specialists at Carnegie Mellon University, demonstrate how this four-step process works by describing the workflow of LightSIDE, an open source machine scoring engine and learning tool. In doing so, they illustrate how the program is able to match or exceed “human performance nearly universally” due to its ability to track and develop large-scale aggregate data based on text data. Mayfield and Rosé argue that this feature allows LightSIDE to tackle “the technical challenges of data collection” in diverse assessment contexts (p. 130). They also emphasize that this capability can help users curate large-scale data based on error-analysis. Writing specialists can then use this information to identify areas (i.e. grammar, sentence structure, organization) where students need instructional and institutional support.

Chapter Four, “The e-rater® Automated Essay Scoring System,” provides a “description of e-rater’s features and their relevance to the writing construct” (p. 55). Authors Jill Burstein, Joel Tetreault, and Nitin Madnani, research scientists at Educational Testing Service, stress that the workflow capabilities of scoring systems like e-rater or Criterion (a platform developed by ETS) make them useful tools for providing students with immediate, relevant feedback on the grammatical and structural aspects of their writing, in addition to being useful in administrative settings where access to aggregate data is critical (pp. 64-65). The authors argue that e-rater’s ability to generate a range of data makes it an asset in responding to both local and national assessment requirements (p. 65).

In Chapter Nineteen, “Contrasting State-of-the-Art Automated Scoring of Essays,” authors Mark D. Shermis and Ben Hamner (Kaggle) compare nine scoring engines’ responses to a variety of prompts in order to assess the workflow and performance levels of each system; the engines include Intelligent Essay Assessor, LightSIDE, e-rater, and Project Essay Grade. This chapter may be particularly useful to individuals tasked with determining which type of automated evaluation system to adopt or replace. In addition, it provides a brief guide to understanding how a variety of systems operate and an overview of “vendor variability in performance” (p. 337).

The Handbook of Automated Essay Evaluation: Current Applications and New Directions provides assessment scholars, practitioners, and writing teachers with relevant information about the workflow of various scoring engines and how these systems’ capabilities can be applied to a range of educational settings. By understanding how these systems work and their potential applications, individuals tasked with writing assessment can make more informed choices about the potential benefits and consequences of adopting automated essay evaluation.

JWA at RNF and CCCC in Indianapolis!

 

JWA will be at the Research Network Forum and CCCC in Indianapolis, March 19-22, 2014.

JWA will be at the Editors’ Roundtable discussion on Wednesday, March 19, 2014 from 1:15-2:30 pm.

If you would like to talk to someone from JWA about a potential project, you can reach Peggy O’Neill at poneill1 [at] loyola [dot] edu or you can contact Jessica Nastal-Dema at jlnastal [at] uwm [dot] edu.

See you there!

Review of _Building Writing Center Assessments that Matter_ by Ellen Schendel and William Macauley

Review of Schendel, E., & Macauley, W. J. (2012). Building writing center assessments that matter. Logan, UT: Utah State University Press.

ISBN 978-0-87421-816-9, paper $28.95; ISBN 978-0-87421-834-3 e-book $22.95

By Marc Scott, Shawnee State University

Ellen Schendel and William Macauley’s 2012 book, Building Writing Center Assessments that Matter (Building), is a co-authored text featuring an introduction and coda by both authors, three chapters authored by Macauley, three by Schendel, a brief interchapter by Neal Lerner, and an afterword by Brian Huot and Nicole Caswell. Much of Building explores how important writing assessment scholarship can apply to writing center program assessment, often using specific examples from the authors’ experiences directing writing centers. Schendel and Macauley’s goal in writing Building is to provide Writing Center Directors (WCDs) new to program assessment with a text that speaks specifically to the unique needs and opportunities of writing center work. While the text is geared toward helping WCDs navigate program assessment, Building also provides assessment scholars and practitioners with important ideas and concepts for program assessment, including how to frame assessment and how to think through methodological options.

Those wishing to develop a culture of assessment at their institution can learn much from Schendel and Macauley’s text. Throughout Building, the authors use tutoring and writing processes as metaphors for assessment work. Just as writers gain invaluable insights by sharing their work with other writers, sharing assessment projects and data with peers only benefits writing assessment. Furthermore, in writing center scholarship and practice, tutors strive to help a writer establish a healthy writing process rather than just proofread or edit a text. When applied to writing assessment, a similar emphasis on process over product might help instructors and students engage in assessment as a reciprocal and recursive form of inquiry that improves the writer holistically, rather than as a linear process with one correct approach for each context (p. xix). In addition, the assessment process, much like the writing process, benefits from careful attention to exigency, context, purpose, and audience. Using the recursion of writing processes and the context-sensitive nature of tutoring as metaphors for assessment may provide an accessible concept for colleagues reluctant to embrace assessment.

Writing assessment practitioners can also benefit from Building’s discussion of assessment methodologies. Schendel describes how Writing Center Directors should work to connect a program assessment’s methodology with each specific project’s purpose, audience, and available data. In fact, Schendel provides a useful chart that describes different forms of data a WCD might collect and explains how the data might be collected and who might collaborate in such efforts (pp. 127-131). The design of a writing assessment, be it a placement exam, a portfolio program, or a classroom assessment technique, should take the assessment’s context and purpose into account at each stage of the process, not just in analyzing results; a writing assessment should be sensitive to the context of the student and the classroom. Neal Lerner’s brief interchapter helps WCDs understand how qualitative and quantitative assessment methodologies might impact assessment projects in writing centers, and his thoughts can also help persuade those reluctant to assess. He argues against “maintaining the status quo” and operating on only a “felt sense” of the work done in writing centers (p. 113). Classroom teachers and WPAs might also feel like they “know” their classrooms, but unless they can provide evidence through assessment for what they know, their claims will fail to persuade important stakeholders.

 

Building, while effectively tailored to the needs of WCDs, provides assessment scholars and practitioners with useful metaphors for discussing assessment and a thoughtful discussion of assessment methodologies. The bulk of the text provides important information for those interested in programmatic assessment, but it does so by thoughtfully weaving together assessment scholarship in a way relevant to writing centers.

JWA at WRAB 2014 in Paris, France

The Journal of Writing Assessment will be at the upcoming Writing Research Across Borders conference in Paris, France, February 19-22, 2014.

If you will be there and would like to talk with Diane Kelly-Riley, co-editor of JWA, please email her at dianek [at] uidaho [dot] edu. We welcome WRAB presenters to adapt their writing-assessment-focused presentations for publication consideration in the Journal of Writing Assessment. Presenters can find JWA submission information at http://www.journalofwritingassessment.org/.