
Last summer, an academic was puzzled as to why their students were suddenly acing the end-of-semester assessments. The usual quizzes were being returned faultlessly. Confident that the inclusion of images would thwart AI tools, the academic mentioned this to a colleague. The colleague’s reply landed with a thud: “You know large language models can work on images as well as text, right?”

Universities face a significant challenge, with tech giants poised to deploy AI-powered learning experiences that could substantially alter the current model. As access to expertise becomes widespread and traditional assessment models are tested by large language models (LLMs), what will distinguish and ensure the value of a university education? Success for universities will hinge on three things: clarity in the teacher-learner relationship, a well-defined institutional mission, and strategic approaches to learning delivery.

We all know that in the era of generative AI and large language models, the usual assessment suspects are dead. The essay, still beloved of many, is perhaps only the most obvious corpse. If we are serious about preventing widespread plagiarism, many other traditional assessment methods are no longer viable either. Unsurprisingly, this potential for content generation currently obsesses academics and managers: how do we stem a flood of “good honours” and the inevitable grade inflation that will catch the eye of the Office for Students? Yet it is almost certainly a mistake to approach this challenge from a purely deficit perspective. A more balanced, and perhaps more “academic”, approach would give equal emphasis to generative AI’s potential for advancing knowledge, and would inspire us to develop authentic, AI-resilient modes of assessment.
