JMIR Ment Health. 2024 Oct 18;11:e57400. doi: 10.2196/57400.

Large Language Models for Mental Health Applications: Systematic Review

Zhijun Guo et al. JMIR Ment Health.

Abstract

Background: Large language models (LLMs) are advanced artificial neural networks trained on extensive datasets to accurately understand and generate natural language. While they have received much attention and demonstrated potential in digital health, their application in mental health, particularly in clinical settings, has generated considerable debate.

Objective: This systematic review aims to critically assess the use of LLMs in mental health, specifically focusing on their applicability and efficacy in early screening, digital interventions, and clinical settings. By systematically collating and assessing the evidence from current studies, our work analyzes models, methodologies, data sources, and outcomes, thereby highlighting the potential of LLMs in mental health, the challenges they present, and the prospects for their clinical use.

Methods: Adhering to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, this review searched 5 open-access databases: MEDLINE (accessed via PubMed), IEEE Xplore, Scopus, JMIR, and ACM Digital Library. The search query was (mental health OR mental illness OR mental disorder OR psychiatry) AND (large language models). This study included articles published between January 1, 2017, and April 30, 2024, and excluded articles published in languages other than English.

Results: In total, 40 articles were evaluated, including 15 (38%) on detecting mental health conditions and suicidal ideation through text analysis, 7 (18%) on the use of LLMs as mental health conversational agents, and 18 (45%) on other applications and evaluations of LLMs in mental health. LLMs are effective at detecting mental health issues and providing accessible, destigmatized eHealth services. However, assessments also indicate that the current risks associated with clinical use might outweigh their benefits. These risks include inconsistencies in generated text; the production of hallucinations; and the absence of a comprehensive, benchmarked ethical framework.

Conclusions: This systematic review examines the clinical applications of LLMs in mental health, highlighting their potential and inherent risks. The study identifies several issues: the lack of multilingual datasets annotated by experts, concerns regarding the accuracy and reliability of generated content, challenges in interpretability due to the "black box" nature of LLMs, and ongoing ethical dilemmas. These ethical concerns include the absence of a clear, benchmarked ethical framework; data privacy issues; and the potential for overreliance on LLMs by both physicians and patients, which could compromise traditional medical practices. As a result, LLMs should not be considered substitutes for professional mental health services. However, the rapid development of LLMs underscores their potential as valuable clinical aids, emphasizing the need for continued research and development in this area.

Trial registration: PROSPERO CRD42024508617; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=508617.

Keywords: BERT; Bidirectional Encoder Representations from Transformers; ChatGPT; digital health care; large language models; mental health.


Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow of the selection process. LLM: large language model.

Figure 2. Number of articles included in this literature review, grouped by year of publication and application field. The black line indicates the total number of articles in each year. CA: conversational agent.
