Everybody’s talkin’ at me/ I don’t hear a word they’re saying.
“Everybody’s Talkin’”, Fred Neil, 1966
From existential threats to damp squibs; from academic integrity meltdowns to minor modifications: over the past year we have heard a great deal of talk about AI, and generative AI in particular.
In a rapidly transforming study and employment landscape, fraught with unknowns and unknowables as well as education-specific controversies and broader ethical debates, it is no surprise that positive (and collective) approaches have taken a while to emerge, given the number of initially tempting blind alleys such as “detection” and “banning”. The Russell Group’s principles on the use of generative AI tools in education have helped build a growing consensus across UK higher education. The first of these principles states that “universities will support students and staff to become AI literate.”
On the surface, this sounds relatively straightforward: we need to work out what we mean by AI, then establish protocols to ensure that people understand the capabilities and limitations of these tools, as well as the implications of using them, for themselves and for their students. For those of us tasked with facilitating AI literacy development (mine is a faculty development perspective in the King’s Academy, coordinating cross-institutional staff and student guidance, support initiatives and investigatory collaborations), the task is to find manageable and coherent messaging within the babble and to filter it in ways that are generalisable to the whole community, all while navigating the treacherous path between clarity and over-simplification.