Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers
- PMID: 37100871
- PMCID: PMC10133283
- DOI: 10.1038/s41746-023-00819-6
Abstract
Large language models such as ChatGPT can produce increasingly realistic text, with unknown implications for the accuracy and integrity of using these models in scientific writing. We gathered 50 research abstracts from five high-impact-factor medical journals and asked ChatGPT to generate research abstracts based on their titles and journals. Most generated abstracts were detected using an AI output detector, 'GPT-2 Output Detector', with % 'fake' scores (higher meaning more likely to be generated) of median [interquartile range] 99.98% [12.73%, 99.98%], compared with median 0.02% [IQR 0.02%, 0.09%] for the original abstracts. The AUROC of the AI output detector was 0.94. Generated abstracts scored lower than original abstracts when run through a plagiarism detector website and iThenticate (higher scores meaning more matching text found). When given a mixture of original and generated abstracts, blinded human reviewers correctly identified 68% of generated abstracts as being generated by ChatGPT, but incorrectly identified 14% of original abstracts as being generated. Reviewers indicated that it was surprisingly difficult to differentiate between the two, though abstracts they suspected were generated were vaguer and more formulaic. ChatGPT writes believable scientific abstracts, though with completely generated data. Depending on publisher-specific guidelines, AI output detectors may serve as an editorial tool to help maintain scientific standards. The boundaries of ethical and acceptable use of large language models to aid scientific writing are still being discussed, and different journals and conferences are adopting varying policies.
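The AUROC of 0.94 summarizes how well the detector's % 'fake' scores separate generated from original abstracts. As a minimal sketch (not the authors' code), AUROC can be computed directly from two score lists via the Mann-Whitney formulation; the scores below are illustrative placeholders, not the study's data.

```python
def auroc(fake_scores, real_scores):
    """Probability that a randomly chosen generated abstract receives a
    higher 'fake' score than a randomly chosen original abstract
    (ties count half) -- the Mann-Whitney formulation of AUROC."""
    wins = 0.0
    for f in fake_scores:
        for r in real_scores:
            if f > r:
                wins += 1.0
            elif f == r:
                wins += 0.5
    return wins / (len(fake_scores) * len(real_scores))

# Hypothetical detector outputs (% 'fake'; higher = more likely generated)
generated = [99.98, 99.9, 12.73, 99.98, 85.0]
original = [0.02, 0.09, 0.02, 1.5, 0.05]
print(f"AUROC = {auroc(generated, original):.2f}")
```

An AUROC of 1.0 would mean every generated abstract outscores every original; 0.5 would mean the detector is no better than chance.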
© 2023. The Author(s).
Conflict of interest statement
A.T.P. reports no competing interests for this work, and reports personal fees from Prelude Therapeutics Advisory Board, Elevar Advisory Board, AbbVie consulting, Ayala Advisory Board, and Privo Therapeutics, all outside of submitted work. The remaining authors declare no competing interests.
Comment in
- Experimenting with ChatGPT: Concerns for academic medicine. J Am Acad Dermatol. 2023 Sep;89(3):e127-e129. doi: 10.1016/j.jaad.2023.04.045. Epub 2023 May 11. PMID: 37179029. No abstract available.
Grants and funding
- 2022YIA-6675470300/American Society of Clinical Oncology (ASCO)
- 2022YIA-6675470300/Breast Cancer Research Foundation (BCRF)
- U01TR003528/U.S. Department of Health & Human Services | NIH | National Center for Advancing Translational Sciences (NCATS)
- R01LM013337/U.S. Department of Health & Human Services | NIH | U.S. National Library of Medicine (NLM)
- U01-CA243075/U.S. Department of Health & Human Services | NIH | National Cancer Institute (NCI)