PLoS One. 2025 Aug 14;20(8):e0330303. doi: 10.1371/journal.pone.0330303. eCollection 2025.

Evaluating chatbots in psychiatry: Rasch-based insights into clinical knowledge and reasoning

Yu Chang et al. PLoS One. 2025.

Abstract

Chatbots are increasingly being recognized as valuable tools for clinical support in psychiatry. This study systematically evaluated the clinical knowledge and reasoning of 27 leading chatbots in psychiatry. Using 160 multiple-choice questions from the Taiwan Psychiatry Licensing Examinations and Rasch analysis, we quantified performance and qualitatively assessed reasoning processes. OpenAI's ChatGPT-o1-preview emerged as the top performer, achieving a Rasch ability score of 2.23, significantly surpassing the passing threshold (p < 0.001). While it excelled in diagnostic and therapeutic reasoning, it also demonstrated notable limitations in factual recall, niche topics, and occasional reasoning biases. Our findings indicate that while advanced chatbots hold significant potential as clinical decision-support tools, their current limitations underscore that rigorous human oversight is indispensable for patient safety. Continuous evaluation and domain-specific training are crucial for the safe integration of these technologies into clinical practice.
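For readers unfamiliar with the method, the sketch below illustrates the dichotomous Rasch model on which the reported ability scores rest. This is an illustrative aid, not the authors' analysis code: the ability value 2.23 logits is taken from the abstract, while the item difficulties and the helper name rasch_p_correct are hypothetical.

    import math

    def rasch_p_correct(theta: float, b: float) -> float:
        # Dichotomous Rasch model: probability of a correct response for a
        # respondent of ability theta on an item of difficulty b, both in logits.
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    theta = 2.23  # ChatGPT-o1-preview's reported Rasch ability (from the abstract)
    for b in (-1.0, 0.0, 2.23):  # hypothetical item difficulties, in logits
        print(f"difficulty {b:+.2f} -> P(correct) = {rasch_p_correct(theta, b):.2f}")

Under these assumed difficulties, the model predicts success probabilities of roughly 0.96, 0.90, and 0.50, showing how a single ability estimate translates into item-level predictions.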


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. The person–item map (PKMAP) of ChatGPT-o1-preview.
It illustrates the relationship between the chatbot's ability and the difficulty of the test items. The vertical axis, measured in logits, represents the difficulty level of the questions.
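As a reading aid (this interpretation follows from the standard Rasch person–item map and is an assumption, not stated in the caption): an item plotted at the same logit height as the chatbot's ability estimate is answered correctly with probability one half, and the probability rises for items plotted below that level, since

    \[
    P(X = 1 \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}},
    \qquad P = \tfrac{1}{2} \ \text{when}\ \theta = b .
    \]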

