Front Psychiatry. 2023 Aug 1;14:1213141. doi: 10.3389/fpsyt.2023.1213141. eCollection 2023.

Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment

Zohar Elyoseph et al. Front Psychiatry.

Abstract

ChatGPT, an artificial intelligence language model developed by OpenAI, holds potential for contributing to the field of mental health. However, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study compares ChatGPT's assessments of mental health indicators to those of mental health professionals in a hypothetical case study focused on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness. The ChatGPT assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than did the mental health professionals in all conditions. Furthermore, ChatGPT rated mental resilience lower than the norms in most conditions. These results imply that gatekeepers, patients, or even mental health professionals who rely on ChatGPT for evaluating suicide risk, or as a complementary tool to improve decision-making, may receive an inaccurate assessment that underestimates the actual suicide risk.

Keywords: ChatGPT; artificial intelligence; diagnosis; psychological assessment; risk assessment; suicide risk; text vignette.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Figures

Figure 1. ChatGPT’s performance in all four conditions on the suicidal ideation variable, compared to the norms of mental health professionals; *p < 0.01.

Figure 2. ChatGPT’s performance in all four conditions on the risk for suicide attempt variable, compared to the norms of mental health professionals; *p < 0.001.
