In the year since OpenAI released ChatGPT, high school teacher Vicki Davis has been rethinking every single assignment she gives her students. Davis, a computer science teacher at Sherwood Christian Academy in Georgia, was well-positioned to be an early adopter of the technology. She’s also the IT director at the school and helped put together an AI policy in March: the school opted to allow the use of AI tools for specific projects so long as students discussed it with their teachers and cited the tool. In Davis’s mind, there were good and bad uses of AI, and ignoring its growing popularity was not going to help students unlock the productive uses or understand its dangers.

“It’s actually changed how I design my projects because there are some times I want my students to use AI, and then there are times I don’t want them to,” Davis said. “What am I trying to teach here? Is this an appropriate use of AI or not?”

Like teachers across the US and UK, Davis, who also runs the education blog Cool Cat Teacher, spent the summer thinking through what the technology's release could mean for her classroom.

Generative AI can produce images of the pope in a bomber jacket and answer nearly any math problem, so what could it do for students? Educators like her played with the tools and tried to understand how they work, what their utility could be – for teachers and students alike – and, perhaps most pressingly, how the software could be misused. Some took drastic measures, going so far as to abandon homework assignments altogether while the technology remained accessible.

“It feels like we’re in some sort of lab experimenting with our kids because it’s changing so rapidly,” Davis said. “If you had asked me about any of this last fall, I couldn’t have told you any of it because ChatGPT didn’t exist.”
