Artificial Hallucinations in ChatGPT: Implications in Scientific Writing
- PMID: 36811129
- PMCID: PMC9939079
- DOI: 10.7759/cureus.35179
Abstract
While still in its infancy, ChatGPT (Generative Pre-trained Transformer), introduced in November 2022, is bound to hugely impact many industries, including healthcare, medical education, biomedical research, and scientific writing. The implications of ChatGPT, the new chatbot introduced by OpenAI, for academic writing are largely unknown. In response to the Cureus Journal of Medical Science Turing Test - a call for case reports written with the assistance of ChatGPT - we present two cases: one on homocystinuria-associated osteoporosis, and the other on late-onset Pompe disease (LOPD), a rare metabolic disorder. We tested ChatGPT's ability to write about the pathogenesis of these conditions, and we documented the positive, negative, and rather troubling aspects of the newly introduced chatbot's performance.
Keywords: artificial intelligence and education; artificial intelligence and writing; artificial intelligence in medicine; chatbot; chatgpt.
Copyright © 2023, Alkaissi et al.
Conflict of interest statement
The authors have declared that no competing interests exist.