JWA at CCCC 15 in Tampa!

Do you have an idea for a manuscript related to writing assessment?  Are you interested in reviewing something for the JWA Reading List?  We would like to talk with you!

Journal of Writing Assessment’s entire editorial team will be at the upcoming Conference on College Composition and Communication in Tampa, Florida, March 17-22, 2015, and we’d love to talk with you about your ideas.

You can email us at journalofwritingassessment@gmail.com to set up an appointment, or we may simply run into you at the conference.

Safe travels and see you there!

–Diane, Carl, Jessica, Ti, David and Bruce

Source: jwa

Technology as Teacher: A Review of Genre-based Automated Writing Evaluation for L2 Research Writing

by Karen R. Tellez-Trujillo, New Mexico State University

In Genre-based automated writing evaluation for L2 research writing, Elena Cotos provides a broad overview of the theoretical and operational frameworks that support research writing pedagogy for second language (L2) writers. Her intended audience is wide: teachers of research writing, researchers, developers of intelligent writing technologies, and scholars. The book draws on empirical evidence and theoretical discussion as it advocates the development of Automated Writing Evaluation (AWE), defined by Cotos as technology used to “complement instruction with computerized affordances that are otherwise unavailable or extremely time and labor-intensive” (p. 5). Through formative assessment of graduate student writing and discipline-specific feedback produced within a scaffolded computer-assisted learning environment (Harasim, 2012), Cotos presents genre-based approaches to academic writing for research writers and L2 research writers. The AWE program Cotos presents is built for L2 research writing and includes a model for designing and evaluating a corpus- and genre-based AWE technology prototype. She closely considers the research writing needs of novice scholars, uses a mixed-methods approach for the empirical evaluation of the Computer-Assisted Language Learning (CALL) materials, and offers a sound resource for educators interested in learning technologies that address the writing challenges faced by L2 graduate student writers.

Genre-based automated writing evaluation for L2 research writing is well organized and comprehensive in its design. Seven chapters presented in two parts include sections on learning and teaching challenges in linguistics and rhetoric, automated writing evaluation, and the conceptualization and prototyping of genre-based AWE. The second half of the book assesses the implementation and evaluation of genre-based AWE for L2 research writing. Cotos explores and evaluates the Intelligent Academic Discourse Evaluator (IADE), a prototype she developed, and later discusses the cognitive and socio-disciplinary dimensions of the learning experience for students who use it. The analysis engine within Cotos’ IADE is trained on John Swales’ move schema (establishing a territory, establishing a niche, and occupying a niche), which allows it to identify the rhetorical structure of a text. As a result, the IADE can give students feedback on their rhetorical structure, along with information on how the distribution of moves in their text compares with the distribution typical of published writing in their discipline. Cotos concludes by introducing the Research Writing Tutor (RWT), an extended “full-fledged corpus-based AWE program for L2 research writing pedagogy” (p. 214) capable of providing discipline-specific feedback attentive to the conventions of the genre in which the student is writing.
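
To make this move-analysis feedback loop concrete, the sketch below shows, in Python, how a system in the spirit of the IADE might label sentences with Swales’ moves and compare a draft’s move distribution to a discipline baseline. The keyword cues, baseline proportions, and function names are illustrative assumptions only; Cotos’ actual engine relies on classifiers trained on annotated, discipline-specific corpora rather than on hand-written rules.

```python
# Hypothetical, simplified sketch of move-based feedback (not Cotos' implementation).
from collections import Counter

MOVES = ["establishing a territory", "establishing a niche", "occupying a niche"]

# Toy keyword cues standing in for a trained move classifier.
CUES = {
    "establishing a territory": ["research has shown", "is widely studied"],
    "establishing a niche": ["however", "little is known", "gap"],
    "occupying a niche": ["this study", "we propose", "the purpose of"],
}

def classify_move(sentence: str) -> str:
    """Assign a sentence to one of Swales' three moves (keyword heuristic only)."""
    lowered = sentence.lower()
    for move, phrases in CUES.items():
        if any(p in lowered for p in phrases):
            return move
    return MOVES[0]  # default when no cue matches

def move_distribution(sentences: list[str]) -> dict[str, float]:
    """Proportion of sentences assigned to each move."""
    counts = Counter(classify_move(s) for s in sentences)
    total = sum(counts.values()) or 1
    return {m: counts.get(m, 0) / total for m in MOVES}

def feedback(draft: list[str], baseline: dict[str, float]) -> list[str]:
    """Flag moves that are underrepresented relative to a discipline baseline."""
    dist = move_distribution(draft)
    return [
        f"Consider expanding '{move}'; it is underrepresented relative to your discipline."
        for move in MOVES
        if dist[move] < baseline[move] - 0.15
    ]

# Example: a short draft checked against an assumed disciplinary baseline.
baseline = {"establishing a territory": 0.4, "establishing a niche": 0.2, "occupying a niche": 0.4}
draft = [
    "Research has shown that automated feedback matters.",
    "However, little is known about its use in graduate writing.",
]
print(feedback(draft, baseline))  # flags the missing 'occupying a niche' move
```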

This book is ambitious: it provides an extensive list of figures and tables and an overview of technologies designed to assist students with writing challenges as it discusses and evaluates the design of the IADE system. A strength of the book is Cotos’ acknowledgement that research writers ought to write “like an authoritative member of the discourse community” (Boote & Beile, 2005, p. 18). Doing so depends on the writer’s understanding of the genre in which they are writing and the expectations of the discourse communities to which they belong. Cotos emphasizes in the introduction, “For graduate students as aspiring scholars, research writing is also the foundation of their academic career and of the credibility of the scholarly dossier” (p. 2), underscoring the need for graduate students to gain credibility and develop skills for effective written communication within their disciplinary communities.

A point of consideration in this book is the controversy surrounding AWE technology. Enthusiasts have looked to it as “a silver bullet,” a simple solution perceived to be immediately successful when applied to language and literacy development (Warschauer & Ware, 2006, p. 175). Cotos argues that AWE can play a role in the writing classroom and in L2 research writing if it is “conceptualized at the earliest design stages” (p. 40) rather than deployed as a quick cure for a fundamental problem. Numerous examples of Automated Essay Scoring (AES) technologies are discussed in support of the author’s argument that AWE technology can serve students well when discipline-specific genre and validity concerns are addressed. AWE provides instructional tools such as timely formative and summative feedback, data-analysis and reporting features, and teacher-adjustable parameters, which, Cotos argues, can be used with less time and effort than their non-automated counterparts require.

Automated evaluation of writing remains controversial largely because it lacks the “human factor.” Cotos notes the drawbacks of and apprehensions about AWE and speaks to issues that have come to the surface through research, recognizing that without attention to these issues the technology will suffer when put into action. And while the author’s attention to the drawbacks of AWE is admirable, leading scholar-teachers in Rhetoric and Composition/Writing Studies remain unconvinced that technology, no matter how sophisticated, can replace a human reader.

In their article “Automated essay scoring in innovative assessments of writing from sources,” Paul Deane et al. (2013) suggested that human raters are needed to evaluate more complex factors such as critical reasoning, strength of evidence, or accuracy of information. Further, in “Comparison of human and machine scoring of essays: Differences by gender, ethnicity, and country,” Brent Bridgeman et al. (2012) presented scores from two high-stakes timed essay tests that use ETS’s e-rater® software: the Test of English as a Foreign Language (TOEFL) iBT and the Graduate Record Examination (GRE). The study revealed that e-rater scored writing by Chinese and Korean speakers more highly than did human raters, but gave lower scores to writing by Arabic, Hindi, and Spanish speakers. The authors hypothesized that these scoring differentials are attributable to stylistic differences that human readers often accommodate but e-rater does not, and that some of these differences may be cultural rather than linguistic (Elliot et al., 2013). In “Uses and limitations of automated writing evaluation software,” Elliot et al. (2013) reminded readers that computational methods of assessing writing rely on cognitive and psychological models of language processing that can be at odds with theoretical understandings of writing as a rhetorically complex and socially embedded process that varies with context and audience.

Implications 
In addition to the application of Swales’ move schema, one of the themes of this book is reflection. Regardless of the specific method used to evaluate student writing, Cotos emphasizes the need for writers to reflect continually on their writing moves and practices. She not only addresses the practice of graduate student research writing but also works with three recursive processes (planning, translating, and reviewing) along with knowledge sub-stages (Flower et al., 1986; Hayes et al., 1987). While the IADE analyzes student writing for moves, the student engages in recursive practices and builds metacognitive awareness of their writing. L2 students using the IADE may well produce quality writing as a result of its detailed feedback; however, outside the hands of a composition teacher, stylistic and cultural factors cannot be considered, so the feedback students receive remains incomplete.

Whether used as an introduction to the various types of intelligent writing technologies or for the sake of research, Genre-based automated writing evaluation for L2 research writing is an ideal resource for teachers and researchers in search of an instrument to aid students in learning to write into their academic discourse communities. All too often in English departments and composition programs, L2 writing issues are left to TESOL specialists. This book helps all teachers of composition recognize that it is within their means to aid all students – including L2 students – with scholarly writing. 

References
Boote, D.N., & Beile, P. (2005). Scholars before researchers: On the centrality of the dissertation literature review in research preparation. Educational Researcher, 34(6).
Bridgeman, B., Trapani, C., & Attali, Y. (2012). Comparison of human and machine scoring of essays: Differences by gender, ethnicity, and country. Applied Measurement in Education, 25(1), 27-40.
Deane, P., Fowles, M., Baldwin, D., & Persky, H. (2011). The CBAL summative writing assessment: A draft eighth-grade design (Research Memorandum 11-01). Princeton, NJ: Educational Testing Service.
Elliot, N., Gere, A. R., Gibson, G., Toth, C., Whithaus, C., & Presswood, A. (2013). Uses and limitations of automated writing evaluation software. WPA-CompPile Research Bibliographies, 23.
Flower, L., Hayes, J.R., Carey, L., Schriver, K., & Stratman, J. (1986). Detection, diagnosis and the strategies of revision. College Composition and Communication, 37, 16-55.
Harasim, L. (2012). Learning theory and online technologies. New York: Routledge.
Hayes, J. R., Flower, L., Schriver, K. A., Stratman, J. F., & Carey, L. (1987). Cognitive processes in revision. In S. Rosenberg (Ed.), Advances in applied psycholinguistics (Vol. 2, pp. 176-241). New York, NY: Cambridge University Press.
Warschauer, M., & Ware, P. (2006). Automated writing evaluation: Defining the classroom research agenda. Language Teaching Research, 10(2), 157-180.


Source: jwa

Call for Papers: Special Issue of JWA on the Common Core State Standards Assessments

Call for Papers
Special Issue of Journal of Writing Assessment
The Common Core State Standards Assessments
The Journal of Writing Assessment is interested in scholars’ and teachers’ responses to the writing assessments connected with the implementation of the Common Core State Standards. The two main consortia, the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have released various types of information about the writing assessments, including approach, use of technology, and sample items.
The assessments were piloted in 2013-14, and are being implemented in most participating states during the 2014-15 academic year. Both SBAC and PARCC are approving and releasing achievement levels based on student performance on the pilot assessments. The SBAC (http://www.smarterbalanced.org/) and PARCC (http://www.parcconline.org/) assessment instruments are reshaping the assessment of—and potentially the teaching and learning of—writing in elementary and secondary education in many states. These assessments are defining and measuring the writing skills students need for “college-and-career readiness.” This enterprise is one of the largest-scale writing assessment projects ever undertaken in the United States. Researchers need to evaluate not only the validity and reliability of these assessment instruments, but also their impacts on teaching and learning.
The Journal of Writing Assessment seeks articles that examine:
  • Theoretical stances behind the Common Core State Standards assessments,
  • Development processes for the CCSS assessment instruments,
  • Implementation of the assessments, and
  • Impacts of these assessments on writing curricula and instruction at the classroom, district, and/or state levels.

We are interested in manuscripts that explore the CCSS assessments from a variety of viewpoints including, but not limited to, empirical, historical, theoretical, qualitative, experiential and quantitative perspectives.
For inclusion in JWA 8.1, proposals (200-400 words) are due by Feb. 27, 2015 to the JWA Submission page. Full drafts of articles are due by May 31, 2015. As accepted manuscripts are developed, please follow JWA’s guidelines for submission. Queries may be addressed to the JWA editors, Diane Kelly-Riley and Carl Whithaus, at journalofwritingassessment@gmail.com.
The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and non-educational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.
For more information, visit JWA online http://www.journalofwritingassessment.org/.

Source: jwa

Review of _Digital Writing Assessment & Evaluation_ by Heidi A. McKee and Danielle Nicole DeVoss, Editors

Review of McKee, H. A., & DeVoss, D. N. (Eds.). (2013). Digital writing assessment & evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from http://ccdigitalpress.org/dwae.

ISBN: 978-0-87421-949-4

By Leslie Valley, Eastern Kentucky University

Heidi McKee and Danielle DeVoss’s 2013 digital book, Digital Writing Assessment and Evaluation (DWAE), offers theoretical and practical approaches to understanding the assessment challenges posed by digital writing. An edited collection, DWAE features a foreword by Andrea Lunsford, a preface by the editors, fourteen chapters by thirty-eight authors, and an afterword by Edward White. While the book focuses primarily on digital writing assessment in post-secondary composition education, its attention to ethics, class structure, multimodal texts, and programmatic concerns highlights key discussions in digital writing that are helpful for K-12 teachers and Writing Across the Curriculum administrators as well.

McKee and DeVoss have organized the chapters of DWAE in a practical way, first addressing the issues of fairness and privacy before moving on to discussions of classroom and programmatic implementation. In the first section, “Equity and Assessment,” Mya Poe and Angela Crow assert the importance of ethical decision-making when gathering and storing data and implementing change based on that data. Having established ethical considerations as the foundation, DWAE then delves into the more specific concerns of grading rubrics, student engagement and responsibility, e-portfolios, and program assessment.

Those looking to understand the connection between digital writing and course learning outcomes also have much to gain from DWAE. In the second and third sections, “Classroom Evaluation and Assessment” and “Multimodal Evaluation and Assessment,” the authors provide specific examples of assignments, students’ digital texts, and approaches to assessment. While they offer different frameworks for assessment, each author emphasizes the connection between assessment and assignment design, the importance of language and early discussions with students, and the necessity of contextualizing assessment. In Chapter 4, for example, Colleen Reilly and Anthony Atkins demonstrate that assessment language can be designed in such a way that it is not only understandable to students but also stimulates their motivation and engagement in the production of digital compositions. Reilly and Atkins point to a primary trait scoring approach rather than a holistic approach as a way to account for both process and product in the classroom.

In addition to classroom and assignment-specific frameworks, DWAE also offers methodologies for program assessment. In the final section, “Program Revisioning and Program Assessment,” the four chapters discuss pedagogical, institutional, and financial motivations for revising program assessment. Again, the authors connect assessment and pedagogy, demonstrating how digital platforms can provide immediate programmatic feedback on assignments, instruction, and grading rubrics, feedback that in turn prompts programmatic revision. They explore the potential of these digital platforms for rethinking program design and professional development for instructors. Specifically, Beth Brunk-Chavez and Judith Fourzan-Rice describe their experience with MinerWriter, a digital distribution system that has allowed the University of Texas at El Paso to standardize assessment. This approach, they contend, has allowed them to bridge the disconnect between assessment and instruction by identifying students’ struggles and responding with assignment revision and professional development at the programmatic level.

McKee, DeVoss, and the authors take advantage of the digital format, linking to additional information and resources, embedding videos and screenshots, and creating non-linear chapters (see, specifically, Chapter 6 by Susan Delagrange, Ben McCorkle, and Catherine Braun). These digital affordances allow DWAE to demonstrate the full rhetorical context in which these assessment models exist, providing readers with a fuller understanding of the connections between assessment, pedagogy, and digital technologies. The advantages of the digital format are especially evident in Meredith Zoeteway, Michelle Simmons, and Jeffrey Grabill’s chapter on assessment design and civic engagement. By including hyperlinks, screenshots, videos, and diagrams, they provide a complete overview of the values, goals, materials, assignments, discussions, and assessments included in a digital writing course focused on civic engagement.

In their preface, McKee and DeVoss acknowledge two areas DWAE does not address: digital writing and students with disabilities, and Automated Essay Scoring (AES) (although Edward White’s afterword does foreground the need for more research on AES). Despite these absences, DWAE is a comprehensive look at digital writing assessment in a variety of contexts. Rather than offering one overarching theory of assessment, the text establishes the importance of assessment in context. The variety of contexts and proposed methodologies prompts both teachers and WPAs to consider digital writing assessment in light of their own ideological and pedagogical values and institutional settings.

Source: jwa

Exciting news from the _Journal of Writing Assessment_

As you know, the Journal of Writing Assessment was founded in 2003 by Kathleen Blake Yancey and Brian Huot as an independent journal that publishes a wide range of writing assessment scholarship from a wide range of scholars and teachers. JWA was originally a print, subscription-based journal published by Hampton Press. In 2011, Peggy O’Neill and Diane Kelly-Riley became editors of JWA and moved the journal to a free, online, open-access publication. Hampton Press generously donated all of the print-based issues of JWA, and they are available for free on the site at http://journalofwritingassessment.org.

Since our move online, JWA has had a great deal of traffic. In the last year, more than 25,000 visits and more than 251,000 hits have been recorded on the JWA site. Additionally, in the past year, scholarship published by JWA has received significant attention in the Chronicle of Higher Education and Inside Higher Ed. We are indexed in ERIC, MLA, and CompPile.org.

So we’d like to update you about exciting news at the Journal of Writing Assessment:

Beginning January 2015, Carl Whithaus of the University of California, Davis, will replace Peggy O’Neill as co-editor of JWA. Carl has an extensive and impressive record as a scholar and practitioner of writing assessment.

Carl’s appointment as co-editor will continue to position JWA as a journal that makes peer-reviewed scholarship about writing assessment accessible to a wide audience. His expertise in automated scoring of writing and his connections with the National Writing Project will greatly benefit JWA as the move to mandated assessments continues, both in the K-12 setting and in higher education. We’re committed to publishing a wide range of scholarship that can inform the quickly changing landscape of writing assessment in educational settings.

Additionally, our associate editor, Jessica Nastal-Dema, will continue in her role with JWA as she transitions to a faculty position at Georgia Southern University.

Likewise, we continue to engage graduate students who are up-and-coming scholars of writing assessment in our work. Tialitha Macklin, a PhD candidate at Washington State University, continues in her role as Assistant Editor, and David Bedsole and Bruce Bowles, PhD students at Florida State University, will co-edit the JWA Reading List.

We are pleased to announce the redesign of the Journal of Writing Assessment site. We have refreshed the look and added a search function so that the entire site (including PDFs) is searchable. This redesign makes the excellent scholarship published by JWA much more accessible to a wider audience. JWA is hosted and designed by Twenty Six Design.

Finally, we want to acknowledge the financial support of the University of Idaho’s College of Letters, Arts and Sciences and Department of English.  Their generous support enables JWA to remain an independent journal.

Diane Kelly-Riley, University of Idaho, and Peggy O’Neill, Loyola University Maryland, Editors

Source: jwa

Part I: Review of Handbook of Automated Essay Evaluation: Current Applications and New Directions. Eds. Mark D. Shermis and Jill Burstein

Shermis, M. D., & Burstein, J. (Eds.). (2013). Handbook of automated essay evaluation: Current applications and new directions. New York, NY: Routledge.

By Lori Beth De Hertogh, Washington State University

The Handbook of Automated Essay Evaluation: Current Applications and New Directions, edited by Mark D. Shermis, University of Akron, and Jill Burstein, Educational Testing Service, features twenty chapters, each dealing with a different aspect of automated essay evaluation (AEE). The overall purpose of the collection is to help professionals (e.g., educators, program administrators, researchers, testing specialists) working in a range of assessment contexts in K-12 and higher education better understand the capabilities of AEE. It also strives to demystify machine scoring and to highlight advances in several scoring platforms.

The collection is loosely organized into three parts. Authors of the first three chapters discuss automated essay evaluation in classroom contexts. The next section examines the workflow of various scoring engines. In the final section, authors highlight advances in automated essay evaluation. My two-part review generally follows this organizational scheme, except that I begin by examining the workflow of several scoring systems as well as platform options. I then review how several chapters describe potential uses of AEE in classroom contexts and recent developments in machine scoring.

The Handbook of Automated Essay Evaluation devotes considerable energy to explaining how scoring engines work. Matthew Schultz, director of psychometric services for Vantage Learning, describes in Chapter Six how the IntelliMetric™ engine analyzes and scores a text:

The IntelliMetric system must be ‘trained’ with a set of previously scored responses drawn from expert raters or scorers. These papers are used as a basis for the system to ‘learn’ the rubric and infer the pooled judgments of the human scorers. The IntelliMetric system internalizes the characteristics or features of the responses associated with each score point and applies this intelligence to score essays with unknown scores. (p. 89)

While the methods that platforms like IntelliMetric use to determine a score differ slightly, they all employ a multistage process consisting of four basic steps (a rough sketch in code follows the list):

  • receiving the text,
  • using natural language processing to parse text components such as structure, content, and style,
  • analyzing the text against a database of previously human- and machine-scored texts, and
  • producing a score based on how similar or dissimilar the text is to previously rated texts.
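
As a rough illustration of this pipeline, here is a minimal Python sketch of the four steps. The feature set, similarity measure, and tiny scored “corpus” are assumptions made for demonstration only; they do not reflect the proprietary models of IntelliMetric, e-rater, LightSIDE, or any other engine discussed in the collection.

```python
# Hypothetical sketch of the generic four-step AEE workflow (illustrative only).
import math

def extract_features(text: str) -> dict[str, float]:
    """Step 2: parse the text into crude proxies for structure, content, and style."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return {
        "word_count": float(len(words)),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(w.lower() for w in words)) / max(len(words), 1),
    }

def similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Step 3 helper: inverse Euclidean distance between two feature vectors."""
    distance = math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return 1.0 / (1.0 + distance)

def score_essay(text: str, scored_corpus: list[tuple[dict[str, float], float]]) -> float:
    """Steps 1, 3, and 4: receive the text, compare it to previously scored
    essays, and return a score weighted by similarity to those essays."""
    features = extract_features(text)
    weighted = [(similarity(features, f), score) for f, score in scored_corpus]
    total = sum(w for w, _ in weighted) or 1.0
    return sum(w * score for w, score in weighted) / total

# Example: a two-essay "database" pairing feature vectors with human-assigned scores.
corpus = [
    (extract_features("A short sample essay about testing."), 3.0),
    (extract_features("A much longer, more developed essay with varied sentence "
                      "structure, stronger organization, and a wider vocabulary."), 5.0),
]
print(round(score_essay("A new essay submitted for automated scoring.", corpus), 2))
```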

In Chapter Eight, Elijah Mayfield and Carolyn Penstein Rosé, language and technology specialists at Carnegie Mellon University, demonstrate how this four-step process works by describing the workflow of LightSIDE, an open-source machine scoring engine and learning tool. In doing so, they illustrate how the program is able to match or exceed “human performance nearly universally” because of its ability to track and develop large-scale aggregate data from student texts. Mayfield and Rosé argue that this feature allows LightSIDE to tackle “the technical challenges of data collection” in diverse assessment contexts (p. 130). They also emphasize that this capability can help users curate large-scale data based on error analysis. Writing specialists can then use this information to identify areas (e.g., grammar, sentence structure, organization) where students need instructional and institutional support.

Chapter Four, “The e-rater® Automated Essay Scoring System,” provides a “description of e-rater’s features and their relevance to the writing construct” (p. 55). Authors Jill Burstein, Joel Tetreault, and Nitin Madnani, research scientists at Educational Testing Service, stress that the workflow capabilities of scoring systems like e-rater and Criterion (a platform developed by ETS) make them useful tools for giving students immediate, relevant feedback on the grammatical and structural aspects of their writing, as well as for administrative settings where access to aggregate data is critical (pp. 64-65). The authors argue that e-rater’s ability to generate a range of data makes it an asset in responding to both local and national assessment requirements (p. 65).

In Chapter Nineteen, “Contrasting State-of-the-Art Automated Scoring of Essays,” authors Mark D. Shermis and Ben Hamner (Kaggle) offer readers a comparison of nine scoring engines’ responses to a variety of prompts in an effort to assess and compare the workflow and performance of each system; the engines compared include Intelligent Essay Assessor, LightSIDE, e-rater, and Project Essay Grade. This chapter may be particularly useful to individuals tasked with determining which type of automated evaluation system to adopt or replace. In addition, it provides a brief guide to understanding how a variety of systems operate and an overview of “vendor variability in performance” (p. 337).

The Handbook of Automated Essay Evaluation: Current Applications and New Directions provides assessment scholars, practitioners, and writing teachers with relevant information about the workflow of various scoring engines and how these systems’ capabilities can be applied in a range of educational settings. By understanding how these systems work and what their potential applications are, individuals tasked with writing assessment can make more informed choices about the potential benefits and consequences of adopting automated essay evaluation.