The ability (or otherwise) to detect students’ use of generative AI was the talk of the fringes at Instructurecon, the edtech conference I’ve been at all week.
Down in the exhibition area, a whole host of third-party plugin vendors for Instructure’s Canvas learning management system were attempting to convince wary teachers and learning design professionals that their system was the one that could catch students using ChatGPT and its ilk, and so protect academic integrity.
The problem was that not only did plenty of attendees not believe them, but even those who did assumed that a high score from one of these tools wouldn’t be enough evidence to prove misconduct and penalise a student.
The stands canny enough to concede that their tools might only “start a conversation” were more often than not met with the reply: “and what do you suggest the next part of the conversation should be?”