Exploring the Potentials and Risks of AI for Educational Assessment

William R. Penuel

AI conjures a mix of excitement and fear among educators these days. AI promises advances toward more personalized learning, as well as tools that help students collaborate more effectively and equitably with one another. At the same time, AI presents risks to privacy and exposes students to algorithmic bias, which is introduced at multiple points in the design of AI systems and threatens to reinforce educational inequities. Some have called for more public oversight before AI is integrated more fully into education.


The State Performance Assessment Learning Community, a project of the Learning Policy Institute, collaborated with the University of Colorado’s inquiryHub team on a workshop series focused on AI’s potential to support innovative assessment practices. In the series, we explored different applications of AI with assessment leaders from across the United States. This is the first post in a four-part blog series describing our discussions.


We began by eliciting workshop participants’ “shoulds” and “should nots” regarding AI in education, that is, what they thought AI should and should not do to support more equitable outcomes for students. Participants hoped to see AI support more differentiation and inclusion in classrooms and enable faster analysis of student work from assessments, among other things. They also worried about AI further perpetuating inequities, replacing teachers, and extending screen time at the expense of face-to-face collaboration.


Because ChatGPT is currently attracting so much attention, our session focused on the large language models behind it and how they work. We also presented examples of forms of AI in education that have been studied for decades. One application that has shown promise builds on research on cognitive tutors to provide real-time help to students engaged in an Earth science investigation.


We also invited participants to put on a “student hat” and work through some of the NSF-funded Institute for Student-AI Teaming’s (iSAT) new curriculum materials designed to teach students how computers learn. One of these units focuses on how to make online gaming communities more welcoming and inclusive. As part of the session, participants explored different ways human beings collaborate with AI to moderate online spaces. The units are freely available for download at the inquiryHub website.


A big takeaway from this session is the importance of beginning with our goals for the future of education in our states before talking about how AI could be used to support them. For any application of AI in education, we can ask:


How well does it help us meet the educational goals we have for students?

To what extent does it reinforce or exacerbate potentially harmful practices (particularly in assessment) or advance goals that we don’t hold?


Prompt for discussion:

What’s an AI application you are excited or worried about? How would you answer the above questions about it?

