For politicians, pundits and psephologists, the tens of thousands of open-text comments regularly collected by the British Election Study (BES) offer a unique and valuable insight into the minds of the UK public. However, the research assistants required to trawl this ocean of text and sort the responses – some 657,000 since 2014 alone – into categories could be forgiven for feeling less enthusiastic about the project. But maybe not for much longer. According to a preprint published in December, ChatGPT, the "large language model" (LLM) chatbot launched in November 2022, was able to code responses with a 92 per cent accuracy rate compared with a trained human coder.
"Coding responses is very time-consuming. I've done a few shifts of it and you can do 2,000 to 3,000 responses in a day," says Ralph Scott, a research associate on the BES based at the University of Manchester, who co-authored the ChatGPT study with colleagues at Manchester and West Point Military Academy. With the aid of artificial intelligence, responses from the past three internet surveys, covering more than 81,000 people, were sifted within seconds, says Scott. "It makes those parts of research which don't need creativity or judgement so much easier," he explains.
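In outline, the task Scott describes is simple: a model assigns each free-text response to one of a fixed set of categories, and the results are checked against a human coder's labels to give an accuracy rate. The sketch below illustrates that shape only; the categories, sample responses and the `classify` stub (a keyword lookup standing in for a real ChatGPT call) are invented for illustration and are not drawn from the BES study.

```python
# Illustrative sketch of LLM-assisted coding of open-text survey responses.
# The classify() function is a keyword stub standing in for a call to a
# large language model; categories and responses are invented examples.

CATEGORIES = ["economy", "immigration", "health", "other"]  # hypothetical

def classify(response: str) -> str:
    """Stand-in for the model call: in practice the response text and the
    category list would be sent to an LLM, which returns one category."""
    keywords = {
        "economy": ["cost", "tax", "jobs"],
        "immigration": ["border", "migrant"],
        "health": ["nhs", "doctor", "hospital"],
    }
    text = response.lower()
    for category, words in keywords.items():
        if any(word in text for word in words):
            return category
    return "other"

def agreement_rate(responses, human_codes):
    """Share of responses where the model's code matches the human code
    (the 92 per cent figure in the preprint is this kind of measure)."""
    model_codes = [classify(r) for r in responses]
    matches = sum(m == h for m, h in zip(model_codes, human_codes))
    return matches / len(responses)

responses = [
    "The cost of living is my biggest worry",
    "Waiting lists at the NHS are far too long",
    "I just don't trust any of them",
]
human_codes = ["economy", "health", "other"]  # hypothetical human labels
print(agreement_rate(responses, human_codes))
```

On this toy data the stub agrees with every human label; the point of the preprint's comparison is that a real LLM can approach that level of agreement across hundreds of thousands of genuine responses.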
Freeing up time and money for higher-level analysis is not the only immediate benefit that ChatGPT offers, in Scott's view. It also opens new avenues for social science inquiry, he believes. This is because even BES researchers, backed by multimillion-pound research grants, must still be cautious about setting too many open-text questions for fear of being flooded with data that will take time and money to process, he says. "What is exciting with ChatGPT is that you can ask a lot more questions because we can analyse the data so much more easily," he says. These extra questions "could reveal there are whole undercurrents in political thought that we just haven't considered".