Imagine a world where a machine could offer students personalised feedback, generate new content tailored to their needs, or even predict their learning outcomes. With the rapid emergence of generative AI – notably the likes of ChatGPT and other large language models (LLMs) – such a world seems to be on our doorstep (Kasneci et al., 2023). However, as the horizon of education broadens with these advancements, we must also consider the maze of ethical challenges that lie ahead (Schramowski et al., 2022).

Educational research has seen accelerated growth in its relationship with LLMs, as evidenced by our scoping review of 118 peer-reviewed empirical studies (Yan et al., 2023). These studies revealed that LLMs have been applied across a striking 53 scenarios for automating educational tasks, ranging from predicting learning outcomes and generating personalised feedback to creating assessment content and recommending learning resources.

While this paints a vivid picture of the vast potential LLMs offer in reshaping educational methodologies, it is not a picture devoid of challenges. Many of the current innovations utilising LLMs have yet to be rigorously tested in real-world educational settings. Furthermore, a working understanding of how these models operate often remains confined to a niche group of AI researchers and practitioners. This insularity raises valid concerns about the broader accessibility and utility of these tools in the educational sphere.