Josh Freeman’s fascinating new HEPI Policy Note, produced in association with Kortext, found that more than half of students have used generative AI for assessments. The note, packed with information about the opportunities afforded by AI, suggested that despite the headline, fewer than 5% of students ‘are likely to’ have used AI to cheat, a figure based on students’ own reporting.
The wide-ranging and generally balanced media commentary focused on this ‘more than half’ statistic, but primarily in a tone of academic curiosity rather than indignation, doubtless reassured by the assertion that ‘fewer than 5%’ are likely to have used it to cheat.
Of course, there is no reliable way to know whether AI has been used to cheat, because detection systems cannot identify it reliably, and cheating is a serious problem even when the numbers are small. You only need to see the media coverage when an A-level paper is leaked to understand how cheating damages public confidence, and to look at social media to see the fury among students. Because grade distributions are maintained to avoid grade inflation, if some people achieve better grades than they earned, others will probably achieve worse grades than they earned. And if some people believe that cheating works, others will follow: they will feel compelled to.
Some say this doesn’t matter because too much is made of assessments, and employers don’t value the things assessments measure. They have a case (there is a much bigger debate to be had about what assessments can reliably measure), but they are missing the point. Assessments are used to grade and award degrees, which are a proxy for what has been learned. Degrees help people get good jobs. If it is no longer necessary to acquire know-how about a subject (other than how to prompt a machine to answer questions), why bother learning, and what is the degree worth? These are obviously questions of existential importance to universities.