A few weeks ago a team of researchers from China published a paper on fossil fuel markets. It appeared in a respected peer-reviewed science journal, but as he read it, Professor Guillaume Cabanac, of the University of Toulouse, noticed that some of the equations seemed to be incorrect.
He also spotted something much odder — a line that read: “As an AI language model, I am unable to generate specific tables or conduct tests.”
The sentence appears to be a smoking gun, a sign that the authors had used ChatGPT, a powerful artificial intelligence system, to produce the work — a practice that, when done less carelessly, threatens to make academic fraud far harder to detect.