An Examination of Generative AI Responses to Suicide Inquiries: Content Analysis
- PMID: 40811811
- PMCID: PMC12371289
- DOI: 10.2196/73623
Abstract
Background: Generative artificial intelligence (AI) chatbots are an online source of information consulted by adolescents to gain insight into mental health and wellness behaviors. However, the accuracy and content of generative AI responses to questions related to suicide have not been systematically investigated.
Objective: This study aims to investigate general (not counseling-specific) generative AI chatbots' responses to questions regarding suicide.
Methods: A content analysis was conducted of the responses of generative AI chatbots to questions about suicide. In phase 1 of the study, the generative AI chatbots examined included (1) Google Bard (now Gemini), (2) Microsoft Bing Chat (now Copilot), (3) ChatGPT 3.5 (OpenAI), and (4) Claude (Anthropic). In phase 2 of the study, additional generative chatbot responses were analyzed: Google Gemini, Claude 2 (Anthropic), xAI Grok 2, Mistral AI, and Meta AI (Meta Platforms). The two phases were conducted a year apart.
Results: A linguistic analysis of the authenticity and tone of the responses was conducted using the Linguistic Inquiry and Word Count (LIWC) program. The depth and accuracy of the responses increased between phase 1 and phase 2 of the study: phase 2 responses were more comprehensive and responsive, providing more information regarding all aspects of suicide (eg, signs of suicide, lethality, resources, and ways to support those in crisis). Another difference noted between the two phases was the emphasis placed on the 988 suicide hotline number.
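For readers unfamiliar with LIWC, the program reports the percentage of words in a text that fall into validated psychological categories; summary measures such as Authenticity and Emotional Tone are composites derived from these percentages. The Python sketch below illustrates the general dictionary-based scoring approach only; the mini-lexicons are hypothetical stand-ins, not LIWC's proprietary dictionaries, and this is not the authors' analysis pipeline.

```python
# Minimal sketch of LIWC-style dictionary scoring (illustrative only).
import re
from collections import Counter

# Hypothetical mini-lexicons; LIWC's real dictionaries are far larger
# and its Tone/Authenticity measures are validated composites.
LEXICON = {
    "positive_tone": {"help", "hope", "support", "care"},
    "negative_tone": {"crisis", "pain", "hopeless", "risk"},
}

def liwc_style_scores(text: str) -> dict[str, float]:
    """Return each category's share of total words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    counts = Counter()
    for word in words:
        for category, vocab in LEXICON.items():
            if word in vocab:
                counts[category] += 1
    return {cat: 100 * counts[cat] / total for cat in LEXICON}

response = "If you feel hopeless, reach out for support: call or text 988."
print(liwc_style_scores(response))
```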
Conclusions: While this dynamic information may be helpful for youth in need, seeking help from a trained mental health professional remains essential. Further, generative AI responses to suicide-related questions should be checked periodically to ensure that best practices in suicide prevention are being communicated.
Keywords: LIWC; Linguistic Inquiry and Word Count; adolescent suicide; artificial intelligence; chatbots; school counseling.
© Laurie O Campbell, Kathryn Babb, Glenn W Lambie, B Grant Hayes. Originally published in JMIR Mental Health (https://mental.jmir.org).