
We’re sure that, like us, you’ve seen it all in recent weeks: from articles suggesting AI can create academic papers good enough for journals, to lecturers being urged to review their assessments in light of ChatGPT’s disruptive capabilities.

But are AI text generation tools really the problem? Or do they reveal more serious issues around assessment practices and the academic/student relationship?

If we continue with current assessment methods, there’s no clear solution on the horizon to mitigate the use of AI tools. We’ve seen some efforts to apply the “detection tool” approach used for other forms of academic malpractice – but every one of them has been beaten in practice, and many flag human-written work as AI-derived.

Simply restricting access is not an option – the technical landscape is moving quickly, with Microsoft and others releasing a range of AI-enhanced tools (such as Bing Search with ChatGPT) and platforms (such as Microsoft Teams). A myriad of new AI large language models (LLMs) are in the works or soon to be released, such as Google’s Bard or NVIDIA’s NeMo.
