Evaluating chatbots in psychiatry: Rasch-based insights into clinical knowledge and reasoning
- PMID: 40811649
- PMCID: PMC12352759
- DOI: 10.1371/journal.pone.0330303
Abstract
Chatbots are increasingly recognized as valuable tools for clinical support in psychiatry. This study systematically evaluated the clinical knowledge and reasoning of 27 leading chatbots in psychiatry. Using 160 multiple-choice questions from the Taiwan Psychiatry Licensing Examinations and Rasch analysis, we quantified performance and qualitatively assessed reasoning processes. OpenAI's ChatGPT-o1-preview emerged as the top performer, achieving a Rasch ability score of 2.23, significantly surpassing the passing threshold (p < 0.001). While it excelled in diagnostic and therapeutic reasoning, it also showed notable limitations in factual recall and niche topics, as well as occasional reasoning biases. Our findings indicate that while advanced chatbots hold significant potential as clinical decision-support tools, their current limitations underscore that rigorous human oversight is indispensable for patient safety. Continuous evaluation and domain-specific training are crucial for the safe integration of these technologies into clinical practice.
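To make the reported "Rasch ability score of 2.23" concrete, the sketch below illustrates the dichotomous Rasch model, which places person ability (θ) and item difficulty (b) on a shared logit scale: P(correct) = 1 / (1 + exp(-(θ - b))). This is a minimal Python illustration of how an ability estimate can be obtained by maximum likelihood from binary item responses, assuming item difficulties are already calibrated; the item difficulties and responses shown are hypothetical, and the study's actual estimation pipeline (likely dedicated Rasch software applied to all 160 exam items) is not described here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Dichotomous Rasch model: P(correct) = 1 / (1 + exp(-(theta - b))),
# where theta is person (here, chatbot) ability and b is item difficulty,
# both expressed in logits on the same scale.

def rasch_prob(theta, difficulties):
    """Probability of a correct response for each item at ability theta."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulties)))

def estimate_ability(responses, difficulties):
    """Maximum-likelihood estimate of theta, given 0/1 responses and
    known (pre-calibrated) item difficulties."""
    responses = np.asarray(responses, dtype=float)
    difficulties = np.asarray(difficulties, dtype=float)

    def neg_log_likelihood(theta):
        p = rasch_prob(theta, difficulties)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    # Search a bounded range of plausible logit values for the MLE.
    result = minimize_scalar(neg_log_likelihood, bounds=(-6, 6), method="bounded")
    return result.x

# Hypothetical example: 10 items with assumed difficulties (logits) and
# one chatbot's answer pattern (1 = correct, 0 = incorrect).
difficulties = np.array([-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
responses    = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 1])

theta_hat = estimate_ability(responses, difficulties)
print(f"Estimated ability (logits): {theta_hat:.2f}")
```

Under this model, an ability of 2.23 logits means the model is expected to answer an item of difficulty 2.23 correctly 50% of the time, and easier items with correspondingly higher probability, which is how a single score can be compared against an exam passing threshold.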
Copyright: © 2025 Chang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Conflict of interest statement
The authors have declared that no competing interests exist.