Montreal researchers put ChatGPT to the test, tell scientists to beware
CTV
Scientists who rely on artificial intelligence when they write their research papers could wind up spreading misinformation, according to a new Montreal study that looked at the quality and accuracy of information specifically from ChatGPT.
The results of the study, published in Mayo Clinic Proceedings: Digital Health, showed that the chatbot provided dubious answers and included factual errors and fabricated references.
"I was shocked by the extent of fabricated references. I wasn't expecting it to be that significant," said co-author Dr. Esli Osmanlliu, an emergency physician at the Montreal Children's Hospital.
To test the AI model, he and two colleagues at CHU Sainte-Justine — Dr. Jocelyn Gravel and Madeleine D’Amours-Gravel — asked ChatGPT 20 medical questions that came from existing studies. Then, they asked the authors of the papers to rate its answers.
Of the 20 authors, 17 participated, and they concluded that most of the information the chatbot returned was questionable, giving its answers a median quality score of 60 per cent.
"Specifically, in one case, it suggested a steroid be given by injection when, in fact, the treatment is given by mouth, so that's a pretty significant difference in the way we administer a medication," Osmanlliu said.
Nearly 70 per cent of the references the chatbot provided were fabricated by the AI tool, and, just as concerning, they looked authentic at first glance, making it easier for scientists to be duped.