
When ChatGPT writes scientific abstracts, can it fool study reviewers?

Even skeptical reviewers can’t spot all the fake abstracts

  • Human reviewers could only detect fake abstracts 68% of the time
  • If used unscrupulously, ChatGPT could undermine scientific research
  • AI language models also could be used for good in scientific writing

 CHICAGO --- Could the new and wildly popular chatbot ChatGPT convincingly produce fake abstracts that fool scientists into thinking those studies are the real thing?

That was the question worrying Northwestern Medicine physician-scientist Dr. Catherine Gao when she designed a study -- collaborating with University of Chicago scientists -- to find out.

Yes, scientists can be fooled, their new study reports. Blinded human reviewers -- when given a mix of real and falsely generated abstracts -- could spot the ChatGPT-generated abstracts only 68% of the time. The reviewers also incorrectly identified 14% of real abstracts as AI-generated.

“Our reviewers knew that some of the abstracts they were being given were fake, so they were very suspicious,” said corresponding author Gao, an instructor in pulmonary and critical care medicine at Northwestern University Feinberg School of Medicine. “This is not someone reading an abstract in the wild. The fact that our reviewers still missed the AI-generated ones 32% of the time means these abstracts are really good. I suspect that if someone just came across one of these generated abstracts, they wouldn’t necessarily be able to identify it as being written by AI.”

The hard-to-detect fake abstracts could undermine science, Gao said. “This is concerning because ChatGPT could be used by ‘paper mills’ to fabricate convincing scientific abstracts,” Gao said. “And if other people try to build their science off these incorrect studies, that can be really dangerous.”

Paper mills are illegal organizations that produce fabricated scientific work for profit.

The ease with which ChatGPT produces realistic and convincing abstracts could increase production by paper mills and fake submissions to journals and scientific conferences, Gao worries.

The paper was published as a preprint on bioRxiv. Preprints have not yet been peer reviewed and should be considered preliminary findings.

AI sleuths can identify AI fakes

For the study, Gao and co-investigators took titles from recent papers in high-impact journals and asked ChatGPT to generate abstracts based on those titles. They ran the generated abstracts and the original abstracts through a plagiarism detector and an AI output detector, and had blinded human reviewers try to differentiate between generated and original abstracts. Each reviewer was given 25 abstracts, a mixture of generated and original, and asked to give a binary judgment of whether they thought each one was real or generated.
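The study's exact prompting setup isn't reproduced here, but a minimal sketch of the title-to-abstract generation step it describes might look like the following. It assumes the OpenAI Python client; the model name, prompt wording, and example title are illustrative, not the authors' own.

```python
# Minimal sketch (not the study's code): generate an abstract from a paper title.
# Model name, prompt wording, and the example title are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

title = ("Machine learning prediction of ventilator-associated pneumonia "
         "in the intensive care unit")  # hypothetical title for illustration

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": f"Write a scientific abstract for the article '{title}' "
                    "in the style of a high-impact medical journal."},
    ],
)

generated_abstract = response.choices[0].message.content
print(generated_abstract)
```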

“The ChatGPT-generated abstracts were very convincing,” Gao said, “because it even knows how large the patient cohort should be when it invents numbers.” For a study on hypertension, which is common, ChatGPT included tens of thousands of patients in the cohort, while a study on monkeypox had a much smaller number of participants.

“Our reviewers commented that it was surprisingly difficult to differentiate between the real and fake abstracts,” Gao said.

The study found that the fake abstracts did not set off alarms using traditional plagiarism-detection tools. However, in the study, AI output detectors such as GPT-2 Output Detector, which is available online and free, could discriminate between real and fake abstracts.

“We found that an AI output detector was pretty good at detecting output from ChatGPT and suggest that it be included in the scientific editorial process as a screening process to protect from targeting by organizations such as paper mills that may try to submit purely generated data,” Gao said. 
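As an illustration only (not the authors' pipeline), a screening step like the one Gao describes could be sketched with the RoBERTa-based GPT-2 output detector available on Hugging Face. The model name and label handling below are assumptions about that checkpoint, not details from the study.

```python
# Illustrative sketch of screening an abstract with a GPT-2 output detector.
# Assumes the Hugging Face "roberta-base-openai-detector" checkpoint; the exact
# label names come from that model's configuration and may differ.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

abstract = "Background: We conducted a retrospective cohort study of ..."  # text to screen

result = detector(abstract, truncation=True)[0]
print(f"Predicted label: {result['label']} (score: {result['score']:.2f})")

# An editorial workflow might flag submissions the detector scores as likely
# machine-generated for closer human review rather than rejecting them outright.
```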

ChatGPT also can be used for good

But ChatGPT can also be used for good, said senior study author Yuan Luo, director of the Institute for Augmented Intelligence in Medicine at Feinberg.

“AI language models such as ChatGPT have a potential to help automate the writing process, which is often the speed bottleneck in knowledge generation and dissemination,” Luo said. “The results from the paper showed this is likely doable for the field of medicine, but we need to bridge certain ethical and practical gaps.”

For example, is AI-assisted writing still considered original, Luo asked. Also, AI-generated text currently has difficulty with proper citation, which is a must for scientific writing, he noted.

“Generative text technology has a great potential for democratizing science, for example making it easier for non-English-speaking scientists to share their work with the broader community,” said senior author Dr. Alexander Pearson, director of data sciences and the Head/Neck Cancer Program in Hematology/Oncology at the University of Chicago. “At the same time, it’s imperative that we think carefully on best practices for use.”

Wife and husband’s dinner table chat inspired the research

Gao was inspired to do the research as a result of a dinner table discussion with her husband and study co-author Dr. Frederick Howard, an instructor in hematology and oncology at the University of Chicago. His research focuses on using artificial intelligence to improve the prediction of response to therapies in breast cancer. Gao and Howard have previously collaborated on computational projects.

“When I read about ChatGPT writing everything from sonnets to school essays, I wondered if it could write scientific abstracts,” Gao recalled. “I asked it to write an abstract about a hypothetical machine-learning study focusing on pneumonia in the intensive care unit. It gave me a scarily good abstract.”

She showed it to Howard, who was immediately intrigued.

“There’s a great need to characterize the accuracy of scientific writing generated by ChatGPT and develop tools to identify when text has been written by these kinds of models,” Howard said.

The scientific community is still debating the boundaries of acceptable use of large language models like this, the authors said. Where is the line between using ChatGPT to help polish one’s writing and having it do the bulk of the work?

“I think it’s important that if people do use ChatGPT to help with writing, they should be very open to disclosing it,” Gao said. “I have even seen a preprint that included ChatGPT as an author. It’s a really interesting question, and I’m excited to watch the discussion around these boundaries unfold.”

Other authors on the paper are Nikolay S. Markov from Northwestern and Emma C. Dyer and Siddhi Ramesh from the University of Chicago.