
Should AI-assisted writing be allowed in academic journals?
The Hindu
The use of AI in academic writing raises concerns about plagiarism, bias, and quality control.
If you search Google Scholar for the phrase “as an AI language model”, you’ll find plenty of AI research literature and also some rather suspicious results. For example, one paper on agricultural technology says, “As an AI language model, I don’t have direct access to current research articles or studies. However, I can provide you with an overview of some recent trends and advancements.”
Obvious gaffes like this aren’t the only signs that researchers are increasingly turning to generative AI tools when writing up their research. A recent study examined how often certain words, such as “commendable”, “meticulously” and “intricate”, appeared in academic writing, and found they became far more common after the launch of ChatGPT, so much so that 1% of all journal articles published in 2023 may have contained AI-generated text.
Why do AI models overuse these words? There is speculation that it is because they are more common in English as spoken in Nigeria, where the human-feedback work used to train these models is often outsourced.
Many people are worried about the use of AI in academic papers. Indeed, the practice has been described as “contaminating” scholarly literature. Some argue that using AI output amounts to plagiarism. If your ideas are copy-pasted from ChatGPT, it is questionable whether you really deserve credit for them.
But there are important differences between “plagiarising” text authored by humans and text authored by AI. Those who plagiarise a human’s work receive credit that ought to have gone to the original author. By contrast, it is debatable whether AI systems like ChatGPT can have ideas at all, let alone deserve credit for them. An AI tool is more like your phone’s autocomplete function than a human researcher.
Another worry is that AI outputs might be biased in ways that could seep into the scholarly record. Infamously, older language models tended to portray people who are female, black and/or gay in distinctly unflattering ways, compared with people who are male, white and/or straight, though this is less pronounced in the current version of ChatGPT.
However, other studies have found a different kind of bias in ChatGPT and other large language models: a tendency to reflect a left-liberal political ideology. Any such bias could subtly distort scholarly writing produced using these tools.