From statistics to deep learning: Using large language models in psychiatric research
- PMID: 39777756
- PMCID: PMC11707704
- DOI: 10.1002/mpr.70007
Abstract
Background: Large Language Models (LLMs) hold promise for enhancing the efficiency of psychiatric research. However, concerns about bias, computational demands, data privacy, and the reliability of LLM-generated content pose challenges. Gap: Existing studies focus primarily on the clinical applications of LLMs, with limited exploration of their potential in broader psychiatric research.
Objective: This study adopts a narrative review format to assess the utility of LLMs in psychiatric research, beyond clinical settings, focusing on their effectiveness in literature review, study design, subject selection, statistical modeling, and academic writing.
Implication: This study clarifies how LLMs can be effectively integrated into the psychiatric research process, offering guidance on mitigating the associated risks while maximizing their potential benefits. Although LLMs hold promise for advancing psychiatric research, careful oversight, rigorous validation, and adherence to ethical standards are essential to address bias, data privacy concerns, and reliability issues, and thereby to ensure their effective and responsible use.
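To make the kind of workflow the abstract describes more concrete, the following is a minimal, illustrative sketch of LLM-assisted title/abstract screening for a literature review. It is not taken from the paper: it assumes the OpenAI Python SDK (openai>=1.0) with an API key in the OPENAI_API_KEY environment variable, and the model name, prompt, and inclusion criteria are placeholders. Consistent with the paper's caution about reliability, the model output is treated as a suggestion to be verified by a human reviewer.

```python
# Minimal sketch: LLM-assisted title/abstract screening for a literature review.
# Assumptions: OpenAI Python SDK (openai>=1.0), OPENAI_API_KEY set in the
# environment; model name, prompt wording, and criteria are illustrative only.
from openai import OpenAI

client = OpenAI()

CRITERIA = (
    "Include studies that apply large language models to any stage of "
    "psychiatric research (literature review, study design, subject "
    "selection, statistical modeling, or academic writing)."
)

def screen_abstract(title: str, abstract: str) -> str:
    """Return an INCLUDE/EXCLUDE suggestion with a one-sentence reason.

    The output is advisory only; every decision should be checked by a human
    reviewer, given known bias and reliability issues with LLM-generated text.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are assisting with systematic literature screening."},
            {"role": "user",
             "content": (f"Criteria: {CRITERIA}\n\nTitle: {title}\n"
                         f"Abstract: {abstract}\n\n"
                         "Reply with INCLUDE or EXCLUDE and a one-sentence reason.")},
        ],
        temperature=0,  # reduce run-to-run variability for auditability
    )
    return response.choices[0].message.content.strip()
```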
Keywords: artificial intelligence; clinical psychiatry; large language models; machine learning; psychiatric epidemiology; psychiatry.
© 2025 The Author(s). International Journal of Methods in Psychiatric Research published by John Wiley & Sons Ltd.
Conflict of interest statement
JT has research support from Otsuka and is an adviser to Precision Mental Wellness. All other authors have no conflict of interest.