Assessing the Accuracy of Diagnostic Capabilities of Large Language Models
- PMID: 40647657
- PMCID: PMC12248924
- DOI: 10.3390/diagnostics15131657
Abstract
Background: In recent years, numerous artificial intelligence applications, especially generative large language models, have emerged in the medical field. This study conducted a structured comparative analysis of four leading generative large language models (LLMs), ChatGPT-4o (OpenAI), Grok-3 (xAI), Gemini-2.0 Flash (Google), and DeepSeek-V3 (DeepSeek), to evaluate their diagnostic performance in clinical case scenarios.
Methods: We assessed medical knowledge recall and clinical reasoning capabilities through staged, progressively complex cases, with responses graded by expert raters on a 0-5 scale.
Results: All models performed better on knowledge-based questions than on reasoning tasks, highlighting ongoing limitations in contextual diagnostic synthesis. Overall, DeepSeek outperformed the other models, achieving significantly higher scores across all evaluation dimensions (p < 0.05), particularly on medical reasoning tasks.
Conclusions: While these findings support the feasibility of using LLMs for medical training and decision support, the study emphasizes the need for improved interpretability, prompt optimization, and rigorous benchmarking to ensure clinical reliability. This structured, comparative approach contributes to ongoing efforts to establish standardized evaluation frameworks for integrating LLMs into diagnostic workflows.
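The abstract reports significance at p < 0.05 but does not name the statistical test used to compare the models' 0-5 expert ratings. As a minimal, hypothetical sketch of how such ordinal ratings might be compared, the Python snippet below applies a Kruskal-Wallis omnibus test with Mann-Whitney U follow-ups; all scores are invented placeholders, and the choice of tests is an assumption, not the paper's reported method.

```python
# Hypothetical sketch: comparing expert ratings (0-5 scale) across four LLMs.
# The scores below are illustrative placeholders, NOT data from the study,
# and the Kruskal-Wallis / Mann-Whitney procedure is an assumed analysis
# suitable for ordinal ratings; the abstract does not specify the test used.
from scipy import stats

ratings = {
    "ChatGPT-4o":       [4, 3, 4, 5, 3, 4],
    "Grok-3":           [3, 3, 4, 4, 3, 3],
    "Gemini-2.0 Flash": [3, 4, 3, 4, 4, 3],
    "DeepSeek-V3":      [5, 4, 5, 5, 4, 5],
}

# Omnibus test for any rating difference among the four models.
h_stat, p_value = stats.kruskal(*ratings.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Pairwise follow-up against the top-scoring model.
    for name, scores in ratings.items():
        if name != "DeepSeek-V3":
            u, p = stats.mannwhitneyu(ratings["DeepSeek-V3"], scores)
            print(f"DeepSeek-V3 vs {name}: U = {u:.1f}, p = {p:.4f}")
```

In practice, a pairwise follow-up like this would also need a multiple-comparison correction (e.g., Bonferroni), which is omitted here for brevity.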
Keywords: artificial intelligence; diagnostic accuracy; large language models; medical education.
Conflict of interest statement
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Similar articles
- Evaluating the Reasoning Capabilities of Large Language Models for Medical Coding and Hospital Readmission Risk Stratification: Zero-Shot Prompting Approach. J Med Internet Res. 2025 Jul 30;27:e74142. doi: 10.2196/74142. PMID: 40737604. Free PMC article.
- A multi-dimensional performance evaluation of large language models in dental implantology: comparison of ChatGPT, DeepSeek, Grok, Gemini and Qwen across diverse clinical scenarios. BMC Oral Health. 2025 Jul 28;25(1):1272. doi: 10.1186/s12903-025-06619-6. PMID: 40721763. Free PMC article.
- Stench of Errors or the Shine of Potential: The Challenge of (Ir)Responsible Use of ChatGPT in Speech-Language Pathology. Int J Lang Commun Disord. 2025 Jul-Aug;60(4):e70088. doi: 10.1111/1460-6984.70088. PMID: 40627744. Review.
- A structured evaluation of LLM-generated step-by-step instructions in cadaveric brachial plexus dissection. BMC Med Educ. 2025 Jul 1;25(1):903. doi: 10.1186/s12909-025-07493-0. PMID: 40598351. Free PMC article.
- Applications and Concerns of ChatGPT and Other Conversational Large Language Models in Health Care: Systematic Review. J Med Internet Res. 2024 Nov 7;26:e22769. doi: 10.2196/22769. PMID: 39509695. Free PMC article.