Psychiatry Res. 2023 Aug;326:115334.
doi: 10.1016/j.psychres.2023.115334. Epub 2023 Jul 7.

ChatGPT and Bard exhibit spontaneous citation fabrication during psychiatry literature search

Alessia McGowan et al. Psychiatry Res. 2023 Aug.

Abstract

ChatGPT (Generative Pre-Trained Transformer) is a large language model (LLM): a neural network that has learned information and patterns of language use from large amounts of text on the internet. ChatGPT, introduced by OpenAI, responds to human queries in a conversational manner. Here, we aimed to assess whether ChatGPT could reliably produce accurate references to supplement the literature search process. We describe our March 2023 exchange with ChatGPT, which generated thirty-five citations, two of which were real. Twelve citations were similar to actual manuscripts (e.g., titles with incorrect author lists, journals, or publication years), and the remaining twenty-one, while plausible, were in fact a pastiche of multiple existing manuscripts. In June 2023, we re-tested ChatGPT's performance and compared it to that of Google's counterpart LLM, Bard 2.0. We investigated performance in English, as well as in Spanish and Italian. Fabrications made by LLMs, including erroneous citations, have been called "hallucinations"; we discuss reasons for which this is a misnomer. Furthermore, we describe potential explanations for citation fabrication by GPTs, as well as measures being taken to remedy this issue, including reinforcement learning. Our results underscore that output from conversational LLMs should be verified.
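The abstract's closing recommendation, that citation output from conversational LLMs should be verified, can begin with a cheap syntactic screen before a manual lookup in PubMed or at the publisher. The sketch below is purely illustrative and is not part of the study's methods; `looks_like_doi` is a hypothetical helper that only checks whether a string has the standard DOI shape, which a fabricated citation may still pass.

```python
import re

# Common DOI form: "10." followed by a 4-9 digit registrant code,
# a slash, and a non-empty suffix with no whitespace.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Return True if the string is syntactically a plausible DOI.

    A syntactically valid DOI can still be fabricated, so a passing
    check only means the citation is worth looking up, not that the
    referenced work exists.
    """
    return bool(DOI_PATTERN.match(doi.strip()))

# Example: the DOI of this article passes the screen.
print(looks_like_doi("10.1016/j.psychres.2023.115334"))  # True
print(looks_like_doi("not-a-doi"))                       # False
```

A screen like this only filters malformed strings; actually confirming that a citation is real still requires resolving the DOI or searching the title and author list by hand, which is the verification step the authors argue for.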

Keywords: Artificial intelligence; Bard; ChatGPT; Citations; Fabrication; Large language models; Linguistic; Literature search; Natural language processing; References.


Conflict of interest statement

Declaration of Competing Interest None.
