Language model tools, such as ChatGPT, Google Bard, claude.ai and pi.ai, are chatbots powered by artificial intelligence that generate intelligent-sounding text in response to user prompts. Students often encounter them, directly or indirectly. How students should use these tools has been debated since the public release of ChatGPT in November 2022 (see Zirar, 2023). Some argue that language model tools enhance the student learning experience; those concerned counter that they restrict student learning.
It is tempting to propose that students should use language model tools only as virtual tutors and for developing early drafts of work, and that they should thoroughly check the output of these tools (Farrokhnia et al., 2023). However, students may instead habitually generate assessed work, bypassing the learning that the work is meant to involve (Zirar, 2023). Students who rely heavily on language model tools without verifying the information risk losing essential skills such as critical thinking and analysis (Wu & Yu, 2024).
My recent article in Review of Education (see Zirar, 2023) synthesises a review of 25 academic articles and highlights two principal themes:
- Students’ exposure to language model tools is helpful only when accompanied by increased awareness of the tools’ limitations. Student learning can then focus on ‘original thoughts’, learning by doing, creative use of knowledge in new settings, and editing and fact-checking.
- Because language model tools can produce fluent but inaccurate text, human educators should treat their output as a source of inspiration or suggestion rather than as factual, valid and reliable.