Editor’s Introduction | Fall 2025

Greetings and welcome to the Journal of Writing Assessment’s Reading List!

We are thrilled to release our Fall 2025 Issue, following the publication of the Journal of Writing Assessment’s Volume 18, Issue 2. Along with the new issue, we are also excited to announce a new team:

  • Chris Blankenship (Salt Lake Community College)
    • Editor
    • Associate Editor, Journal of Writing Assessment
  • Alexis Teagarden (University of Massachusetts, Dartmouth)
    • Assistant Editor
  • Olivia McMurray (American River College)
    • Copy Editor

This issue of JWARL presents three reviews that cover a range of topics in assessment. The first discusses Dan Melzer’s thoughts on writing assessment modalities and guidance for improving feedback. The second examines Asao Inoue’s labor-based grading policies and ways to further increase classroom grading equity for students with disabilities and neurodivergencies. The final review looks at issues of equity and fairness in large-scale standardized testing in the K-12 system in the United States as discussed in a special issue of Educational Assessment. The reviews for this issue can be accessed here:

  • Reconstructing Response to Student Writing: A National Study from Across the Curriculum by Dan Melzer — reviewed by Isaac Castillo (San Diego State University)
  • Cripping Labor-Based Grading for More Equity in Literacy Courses by Asao Inoue — reviewed by Kat M. Gray (University of Arkansas)
  • Educational Assessment’s special issue: “Fairness in educational assessment and the next edition of the Standards” with articles from Jennifer Randall, Randy Bennett, and Guillermo Solano-Flores — reviewed by Jen Daly (University of New Hampshire)

We are thankful for our reviewers’ hard work and appreciate their observations as they bring renewed attention to these texts about writing assessment.

As always, we are actively recruiting new reviewers for the Reading List, which you can join by filling out this form. To ensure the accuracy of our reviewer list, we have created a new form for the 25-26 academic year, so even if you have applied to be a reviewer in the past, please update your information.

We’re also always interested in recommendations for new texts in writing assessment to review (self-promotion is welcome!); you can contact us at jwareadinglist@gmail.com.

Cheers!

Chris, Alexis, Olivia
jwareadinglist@gmail.com

Review of Dan Melzer’s Reconstructing Response to Student Writing: A National Study from Across the Curriculum

Reviewed by Isaac Castillo, San Diego State University

Melzer, D. (2023). Reconstructing response to student writing: A national study from across the curriculum. Utah State University Press.

After spending the better part of the year with Dan Melzer’s Reconstructing Response to Student Writing: A National Study from Across the Curriculum, I find myself recommending it to experienced instructors and graduate students entering the classroom for the first time. The book has reshaped how I think about response, challenging some of my earlier assumptions and opening new possibilities for practice. Even though Melzer does not offer a step-by-step guide for easing the labor of feedback, his attention to modality—both in theory and in application—helped me move away from older response methods that are arguably less effective. Melzer prompted me to think about which response modalities to use and how multimodal approaches can save time while giving students a stronger sense of their own ability as writers. At the same time, Melzer (2023) raises empirical questions about how instructors position themselves when responding to student writing, which pushed me to rethink what he calls the “narrow and intimidating role as judge and jury” (p. 88). That shift allowed my students to see me less as an evaluator and more as a reader and guide, and it reflects Melzer’s central concern in the book: reimagining response in ways that place students, rather than instructors, at the center of the feedback process.

In the opening pages, the reader encounters an author concerned about his pedagogical impact. As an educator, I found myself in his reflections, contemplating “if students were paying close attention to my feedback and applying it to future drafts, and if students were able to transfer my suggestions to the writing they were doing in their other courses” (Melzer, 2023, p. 3). He lays this personal concern alongside the historical trend of writing researchers identifying instructor comments as likely to be controlling, directive, and mean, which results in students relinquishing control of and appreciation for the writing process (Brannon & Knoblauch, 1982; Sommers, 1982, as cited in Melzer, 2023, p. 12). It is Melzer’s intention to change this dynamic by putting students, not teachers, at the helm of response and assessment.

Melzer bases this argument on his review of e-portfolios from 70 U.S. colleges and universities, involving more than a thousand drafts as well as peer and teacher responses, student reflections on feedback, and student self-assessments. This corpus notably provides insight into student perceptions and opinions about received feedback, important contextual information that previous studies of written comments often lacked (Melzer, 2023, p. 6). The size of the corpus gives Melzer the opportunity to investigate the role students play in evaluating their peers during the feedback process, specifically the manner in which students self-monitor and reflect on their learning (Lee, 2014, as cited in Melzer, 2023, p. 14). Thus, in Reconstructing Response to Student Writing, Melzer’s (2023) contribution is both to include students’ perspectives in the research corpus and to suggest how writing instructors can assess students contextually.

The second chapter is devoted to Melzer’s constructivist heuristic, which researchers can consider when studying response and which writing instructors can also use when designing contextualized responses to writing. Because writing instructors occupy several entangled roles, such as evaluator, educator, and audience, it can be difficult for them to keep in view the context in which students are writing. Melzer’s (2023) heuristic helps untangle these roles and guide response through six questions:

  1. Who should respond?
  2. What should the response focus on?
  3. What contexts should responders consider?
  4. What type of feedback should responders give?
  5. When should a response occur?
  6. What modalities should responders use? (p. 8)

Chapters three through five apply the heuristic, directly addressing the relationship between its components and the roles students and teachers play.

In the third chapter, “Teacher Response to Writing,” Melzer explores how power dynamics shape response. Melzer interprets most students as trying to meet their instructors’ expectations. Drawing on Straub (1996), he notes that instructors tend to leave evaluative, directive comments that dominate the revising process (as cited in Melzer, 2023, pp. 54-55). For example, Melzer (2023) describes a comment left on a student draft that displays this power dynamic: “You MUST correct your format” (p. 51). As Melzer sees it, the prescription for such a malady is for instructors to focus on metacognition and transfer of learning: “Next time, please allow time toward the end of your revision process to find your clearest presentation of your claim, and add it to the introduction” (Melzer, 2023, p. 62). In short, Melzer argues for facilitative rather than directive feedback, especially comments that “feedforward” to support the transfer of learning.

In the fourth chapter, Melzer applies his constructivist heuristic to peer review. Although students engage with each other and provide meaningful commentary, Melzer argues peer reviews are typically treated as a supplement to teacher commentary. He stresses the importance of peer review because he is interested in putting students, rather than the writing instructor, at the forefront of response. Melzer (2023) notes that previous studies show peer feedback can often resemble teacher response (p. 90), and the 419 peer responses in his corpus align with those findings: peers are less directive, providing facilitative feedback and open-ended questions that shape their commentary. Melzer grants that peer review cannot wholly replace instructor feedback, but he demonstrates that students can engage and provide meaningful response, and he argues that peer review illustrates writing as a social process, thus altering students’ perspective on the writing process as a whole.

In the fifth chapter, Melzer (2023) shifts focus away from teachers even more by supplying evidence that students are able to “assess their own writing and meaningfully reflect on their writing habits, processes, and growth” (p. 112). Melzer claims that students can explain their growth as writers and further argues that teachers should be concerned with a student’s self-efficacy and engage in dialogue with this self-assessment, not with the final draft. Melzer suggests teachers provide scripts that guide students’ thinking about their writing while still allowing individual students to reflect on their abilities. He also notes writing instructors can observe this growth by examining their students’ literacy histories, which provide valuable context.

Chapter six concludes that writing instruction needs to move away from the teacher-student dyad because, otherwise, students will ultimately perceive the writing course as a game played to earn a grade. The goal is to foster students’ self-efficacy and encourage continued self-assessment in conjunction with critical self-reflection, which should be cultivated early in any writing course. Melzer ends the book with a postscript offering prescriptive measures for improving writing curricula in higher education.

Despite the upbeat and progressive tone of the book, Melzer is open about its limitations. There are nods toward teacher bias as it relates to disability, gender, and race, but Melzer does not speculate on how his approach might mitigate ableism, sexism, or racism. Melzer also mentions several times that audio and screencast are potentially more engaging modalities of response, but the corpus contains too few examples of these modalities for him to comment on them.

Overall, Reconstructing Response to Student Writing: A National Study from Across the Curriculum is a concise yet densely packed book that offers clear guidance for anyone looking to improve the feedback they provide to students. The constructivist heuristic is practical, and Melzer’s suggestions are doable for any writing instructor. 

Isaac Castillo is an administrative assistant in the Department of Rhetoric and Writing Studies at San Diego State University, where he is also pursuing his second master’s degree. He previously earned an M.A. in Philosophy at SDSU and a B.A. in Human Communication from CSU Monterey Bay. Castillo occasionally teaches in the philosophy department and identifies as an interdisciplinary scholar.

References 

Melzer, D. (2023). Reconstructing response to student writing: A national study from across the curriculum. Utah State University Press.

Review of Asao B. Inoue’s Cripping Labor-Based Grading for More Equity in Literacy Courses

Reviewed by Kat M. Gray, University of Arkansas

Inoue, A. B. (2024). Cripping labor-based grading for more equity in literacy courses. The WAC Clearinghouse; University Press of Colorado. Retrieved from https://wacclearinghouse.org/books/practice/cripping/

The first time I gave feedback on student writing I froze, disarmed by what I now know is a common question: How do I know I’m assessing the right way? I tested rubrics, weighted point scales, portfolio grading, and more, but most systems felt like justifying my quality judgements about student writing in support of what I didn’t yet know to call “habits of white language” (Inoue, 2021). Teacher-scholars inside and outside our discipline have repeatedly acknowledged this problem in critiques of writing assessment practices (Butler, Casmier, Flores, et al., 1974; Kohn, 2006; Kynard, 2008, 2013; Baker-Bell, Williams-Farrier, Jackson, et al., 2020; Blum, 2020; Stommel, 2020).

In 2019, I read Asao Inoue’s Labor-Based Grading Contracts: Building Equity and Inclusion in the Compassionate Writing Classroom. Inoue offered Labor-Based Grading (LBG) as a tool to value the work students do over the “quality” of their written products. LBG relies on completeness, measured through word requirements and clear labor instructions. Students use labor logs and reflective writing to assign value to their work, and instructors focus on feedback to guide students through the process. When I began using LBG, students repeatedly impressed me with their investments in process, experimentation, feedback, and revision.

In 2024, Inoue wrote Cripping Labor-Based Grading for More Equity in Literacy Courses, a monograph responding to disciplinary conversations and critiques of LBG. In this text, Inoue engages with disability studies to create a theoretical framework that accounts for the biases inherent in quantifying labor and time and models more flexible, intersectional heuristics for writing assessment. 

In Chapters 1-3, Inoue incorporates insights from disability studies. Chapter 1 explores claims that LBG advances ableist and neurotypical performance expectations and disadvantages learners who don’t (or can’t) fit. For Inoue, this is an opportunity to improve how LBG foregrounds completeness over quality. In Chapter 2, Inoue creates an intersectional definition of disability that allows more students to succeed by reconstituting labor and its measurements. Finally, he explores how “crip time” changes labor. Referencing Margaret Price, Tara Wood, and Allison Kafer, Inoue (2024) explains crip time as “a reorientation to time” (p. 18) that asks us to be “more capacious” and “more generous” (p. 19) in understanding what successful processes and outcomes look like. He defines crip labor as labor that “considers the ability to labor as universal but flexible, open-ended in terms of what it looks like, feels like, or is expected to be or produce” (Inoue, 2024, p. 22). This definition challenges notions of student progress that disadvantage marginalized learners.

Chapters 4 and 10 respond to Ellen Carillo’s The Hidden Inequities in Labor-Based Contract Grading. Chapter 4 discusses the critique that labor is construed as “neutral and quantifiable” (Inoue, 2024, p. 25). Inoue agrees that without a definition of disability to structure labor expectations, this is a risk. However, reflection and metacognition are critical “talk-back” moments; through reflection, Inoue (2024) understands “[w]hat labor means to a student” and thereby “the success or effectiveness of the ecology” (p. 27). Critically, only a student can articulate this meaning. Chapter 10 examines Engagement Based Grading (EBG), Carillo’s alternative. EBG centers how students engage with a course: students choose how to labor and instructors assess their choices. However, making sure students know how to choose is an equity issue (Inoue, 2024, pp. 99-101). Further, “engagement” is a problematic standard given the difficulty of measuring a phenomenological experience (Inoue, 2024, p. 75).

Chapters 5-9 respond to other critiques. Particularly important is Inoue’s (2024) attention to contract negotiations in Chapter 5, comparing “forced intimacy” (p. 33) and “access intimacy” (p. 34). Disability justice activist Mia Mingus (2011) defines access intimacy as “that elusive, hard to describe feeling when someone else ‘gets’ your access needs” (para. 4). Access intimacy is not “charity, resentfulness enacted, intimidation, a humiliating trade for survival or an ego boost” (Mingus, 2011, para. 9). Through access intimacy, contract negotiations become complex, engaged, and relational rather than fill-in-the-blank exercises.

Chapters 6 and 9 explore quantitative measures of labor and affective attachment to grades. Inoue (2024) reminds us of Peter Elbow’s warning about “a deep hunger to rank” (p. 87) in writing classrooms. Ranking promotes “racist culture and White supremacist discourse,” using allegedly neutral measures to decide “who is ‘better,’ who is more valuable, who is more deserving” (Inoue, 2024, pp. 87-88). However, removing these standards may disadvantage neurodivergent students who rely on structure and predictability to learn. If grades are removed as the structural support for courses, flexible measures must replace them; for example, time estimates in LBG should strive for “reasonable accuracy” while clarifying that labor looks different for different students (Inoue, 2024, p. 45).

Chapters 7 and 8 explore how hidden quality judgements become implicit in labor standards and how to redirect biases in grading ecologies. Biases accumulate in rigid time and labor expectations, which disadvantage a wide variety of students. “[I]nherently neutral measures” (p. 56) do not exist – measures of labor are not an “accounting system” (p. 75) or surveillance practice (Inoue, 2024). Rather, labor practices are negotiated with student input. In turn, formative feedback should “offer the teacher’s experiences of the student’s written work for their benefit” (Inoue, 2024, p. 78). Feedback should not “justify a grade,” “determine completion of [an] assignment,” “substantiate any decision about an assignment,” or “articulate future quality or labor expectations” (Inoue, 2024, p. 78). 

To close, Inoue gives suggestions for revising LBG ecologies. First, he writes, “[t]he highest grade possible should simply be the default grade in the contract” (Inoue, 2024, p. 81). Open access to an A increases equity for all students in the classroom. Chapter 11 is particularly helpful for experienced LBG practitioners as a checklist for retooling LBG assessment. Teachers interested in trying LBG should read this book after Inoue’s (2019) introduction. The appendices provide updated sample documents critical for setting the scope and tone of contract negotiations at the outset of a course.

Ultimately, Inoue’s book reminds us of our duty to continue asking hard questions about assessment. No standards are neutral – approaching equitable writing assessment requires intersectional framing, regular critical reflection, and thoughtful revision.

Kat M. Gray (PhD) works as Assistant Director for the Program in Rhetoric and Composition at the University of Arkansas. Their research areas include cultural rhetorics, technical communication pedagogies and curriculum design, and queer rhetorics. They live with their partner and cat in beautiful Fayetteville, Arkansas on Quapaw, Caddo, Osage and Očhéthi Šakówiŋ Sioux lands.

References

Baker-Bell, A., Williams-Farrier, B. J., Jackson, D., Johnson, L., Kynard, C., & McMurtry, T. (2020). This ain’t another statement! This is a DEMAND for Black Linguistic Justice! Retrieved from https://cccc.ncte.org/cccc/demand-for-black-linguistic-justice

Blum, S. D. (Ed.) (2020). Ungrading: Why rating students undermines learning (and what to do instead). West Virginia University Press.

Butler, M., Casmier, A., Flores, N., Giannasi, J., Harrison, M., Hogan, R., Lloyd-Jones, R., Long, R. A., Martin, E., McPherson, E., Prichard, N., Smitherman, G., & Winterowd, W. R. (1974). Students’ right to their own language. Retrieved from https://prod-ncte-cdn.azureedge.net/nctefiles/groups/cccc/newsrtol.pdf

Carillo, E. C. (2021). The hidden inequities in labor-based contract grading. Utah State University Press.

Inoue, A. B. (2019). Labor-based grading contracts: Building equity and inclusion in the compassionate writing classroom. The WAC Clearinghouse. Retrieved from https://wac.colostate.edu/books/perspectives/labor/

Inoue, A. B. (2021). The habits of white language (HOWL). What It Means To Be An Antiracist Teacher: Cultivating Antiracist Orientations in the Literacy Classroom. Retrieved from http://asaobinoue.blogspot.com/2021/07/blogbook-habits-of-white-language-howl.html.  

Inoue, A. B. (2024). Cripping labor-based grading for more equity in literacy courses. The WAC Clearinghouse; University Press of Colorado. Retrieved from https://wacclearinghouse.org/books/practice/cripping/

Kohn, A. (2006). The trouble with rubrics. English Journal, 95(4). Retrieved from http://www.alfiekohn.org/article/trouble-rubrics/.  

Kynard, C. (2008). Writing while Black: The Colour Line, Black discourses, and assessment in the institutionalization of writing instruction. English Teaching: Practice and Critique, 7(2).

Kynard, C. (2013). Self-Determined…and OF COLOR. Retrieved from http://carmenkynard.org/self-determined-color/

Mingus, M. (2011). Access intimacy: the missing link. Retrieved from https://leavingevidence.wordpress.com/2011/05/05/access-intimacy-the-missing-link/.

Stommel, J. (2020). Ungrading: an FAQ. Retrieved from https://www.jessestommel.com/ungrading-an-faq/

Review of Educational Assessment’s Special Issue: Fairness in Educational Assessment and the Next Edition of the Standards

Reviewed by Jen Daly, University of New Hampshire

Herman, J. L., Bailey, A. L., & Martinez, J. F. (Eds.). (2023). Fairness in educational assessment and the next edition of the Standards [Special issue]. Educational Assessment, 28(2).

Educational Assessment’s special issue “Fairness in educational assessment and the next edition of the Standards,” organized as a dialogue among three authors, Jennifer Randall, Randy Bennett, and Guillermo Solano-Flores, tackles themes of equity, fairness, and justice-oriented approaches to large-scale standardized testing in U.S. K-12 schools. The issue begins with Randall’s piece “It Ain’t Near ‘Bout Fair: Re-Envisioning the Bias and Sensitivity Review Process from a Justice-Oriented Antiracist Perspective.” Bennett authors the next piece titled “Toward a Theory of Socioculturally Responsive Assessment.” Solano-Flores then responds to both Randall and Bennett in “How Serious Are We about Fairness in Testing and How Far Are We Willing To Go? A Response to Randall and Bennett with Reflections about the Standards for Educational and Psychological Testing.” The issue closes with a summative “Fairness in educational assessment and the next edition of Standards: Concluding Commentary.”

Beginning with “It ain’t near ‘bout fair,” Randall calls for an explicitly antiracist re-envisioning of the item review stage (also known as Bias/Fairness and Sensitivity Review) of standardized test development. This process is meant to challenge white supremacist logics that have long provided the foundation for large-scale educational assessments, and Randall argues that using Critical Race Theory (CRT) and Critical Whiteness Theory (CWT) as frameworks will generate a shift “for learning and, if necessary, unlearning” (p. 70). Randall’s recommendations focus on three actionable revisions: “(1) shift from a fear-oriented to a justice-oriented perspective in the development of guidelines; (2) a re-envisioning of what is meant by barriers and construct irrelevant variance; (3) the need to facilitate the development of the collective critical consciousness of assessment developers and reviewers” (p. 72). According to Randall, current bias and sensitivity review processes are driven by fear and constructed with stakeholders in mind, not students. While these policies claim to avoid traumatizing minority students, they are actually “racism disingenuously cloaked as a concern for the emotional well-being of students” (p. 75). Randall also recommends an antiracist approach to language, one that undercuts the dominant assumption that standard edited American English is the formal language. The third recommendation focuses on educating assessment professionals to decenter whiteness and become intentionally antiracist. These recommendations offer concrete ways that tests can become sites of resistance to white supremacy and spaces of learning while engaging all students in antiracist practices.

Entering the long-standing dialogue surrounding the abolition of standardized tests, Randy Bennett next charts a way to continue using standardized tests, albeit with a major overhaul centered on equity and justice. Using existing frameworks of culturally responsive education and culturally relevant pedagogy, Bennett comes to understand tests as cultural artifacts that not only reflect ideologies but also work to perpetuate them. At present, testing models are widely based on antiquated and eugenicist perspectives that have affected vulnerable populations in material and emotional ways (Bennett, 2023). For Bennett, the task is to change perspective in order to accommodate and acknowledge multiple ways of knowing and communicating. He builds a working definition of socioculturally responsive assessment from five principles:

  1. includes problems that connect to the cultural identity, background, and lived experiences of all individuals, especially from traditionally underserved groups;
  2. allows forms of expression and representation in problem presentation and solution that help individuals show what they know and can do;
  3. promotes deeper learning by design;
  4. adapts to personal characteristics including cultural identity; and
  5. characterizes performance as an interaction among extrinsic and intrinsic factors. (p. 96)

Bennett notes that more research is necessary, but there is no need to wait: there are already successful models of equitable and empowering assessment and a wide variety of current technologies that can offer more individual approaches to testing. 

Taking a historical perspective, Guillermo Solano-Flores examines the use of the term “fairness” and its connection to oppression, power differentials, and inequality, arguing that Randall and Bennett offer two perspectives on two different aspects of testing that should be implemented together rather than thought of as separate recommendations. For Solano-Flores, talk is not enough: “I would like to see a deep and honest recognition of the limitations of assessment systems, not an update of terms and appearances” (p. 105), and Standards for Educational and Psychological Testing is a place to enact actionable change in defining fairness with more “‘must’s’ and fewer ‘should’s’” (p. 114). For Solano-Flores, testing is a component of a much larger societal issue: “there is a price that we…need to pay if we are serious about fair testing. That price has the form of a new system of social and institutional priorities, a change of mentality, and willingness to do things differently” (p. 114).   

While this special issue considers large-scale testing, many of its justice-oriented recommendations could be applied on a smaller scale through their theoretical foundations. Over the last ten years, the discipline of Writing Studies has examined writing assessment through a justice-oriented, equity-driven lens, with scholars such as Asao Inoue and Mya Poe (Inoue, 2015; Poe & Inoue, 2016; Poe, Inoue, & Elliot, 2018; Randall, Slomp, Poe, & Oliveri, 2022), Staci Perryman-Clark (2016), Anne Ruggles Gere and colleagues (2021), William Banks, Nicole Caswell, and Stephanie West-Puckett (2023), Mary Stewart (2022), and Annie Del Principe (2023) working to uncover biases built into assessment processes and to theorize more justice-driven approaches to assessment in writing programs; many more scholars are entering the conversation daily. Assessment practices are central to creating an equitable writing program, and there is still much work to be done.

Jen Daly (she/her) is a PhD candidate in English: Composition and Rhetoric at the University of New Hampshire. She has presented on writing assessment at the Conference on College Composition and Communication and received a grant through the Boston Rhetoric and Writing Network for archival work on WPA histories at UNH. Jen is currently working on her dissertation, which examines early 19th-century American women’s writing and the creation of metaphorical spaces through worldbuilding in their personal writing.

References

Banks, W. P., Caswell, N. I., & West-Puckett, S. (2023). Failing sideways: Queer possibilities for writing assessment. Utah State University Press.

Bennett, R. E. (2023). Toward a theory of socioculturally responsive assessment. Educational Assessment, 28(2), 83–104.

Del Principe, A. (2023). Time as a “built-in headwind”: The disparate impact of portfolio cross-assessment on Black TYC students. Journal of Writing Assessment, 16(1).

Gere, A. R., Curzan, A., Hammond, J. W., Hughes, S., Li, R., Moos, A., Smith, K., Van Zanen, K., Wheeler, K. L., & Zanders, C. J. (2021). Communal justicing: Writing assessment, disciplinary infrastructure, and the case for critical language awareness. College Composition and Communication, 72(3), 384–412.

Inoue, A. B. (2015). Antiracist writing assessment ecologies: Teaching and assessing writing for a socially just future. The WAC Clearinghouse; Parlor Press.

Perryman-Clark, S. M. (2016). Who we are(n’t) assessing: Racializing language and writing assessment in writing program administration. College English, 79(2), 206–211.

Poe, M., & Inoue, A. B. (2016). Toward writing as social justice: An idea whose time has come. College English, 79(2), 119–126.

Poe, M., Inoue, A. B., & Elliot, N. (2018). Writing assessment, social justice, and the advancement of opportunity. The WAC Clearinghouse; University Press of Colorado.

Randall, J. (2023). It ain’t near ‘bout fair: Re-envisioning the bias and sensitivity review process from a justice-oriented antiracist perspective. Educational Assessment, 28(2), 68–82.

Solano-Flores, G. (2023). How serious are we about fairness in testing and how far are we willing to go? A response to Randall and Bennett with reflections about the Standards for Educational and Psychological Testing. Educational Assessment, 28(2), 105–117.

Stewart, M. (2022). Confronting the ideologies of assimilation and neutrality in writing program assessment through antiracist dynamic criteria mapping. Journal of Writing Assessment, 15(1).