Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment
- PMID: 37593450
- PMCID: PMC10427505
- DOI: 10.3389/fpsyt.2023.1213141
Abstract
ChatGPT, an artificial intelligence language model developed by OpenAI, holds the potential for contributing to the field of mental health. Nevertheless, although ChatGPT theoretically shows promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study aims to compare ChatGPT's assessments of mental health indicators to those of mental health professionals in a hypothetical case study that focuses on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness. The ChatGPT assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than did the mental health professionals in all conditions. Furthermore, ChatGPT rated mental resilience lower than the norms in most conditions. These results imply that gatekeepers, patients or even mental health professionals who rely on ChatGPT for evaluating suicidal risk or as a complementary tool to improve decision-making may receive an inaccurate assessment that underestimates the actual suicide risk.
Keywords: ChatGPT; artificial intelligence; diagnosis; psychological assessment; risk assessment; suicide risk; text vignette.
Copyright © 2023 Elyoseph and Levkovich.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
