Diagnostics (Basel). 2025 Jun 29;15(13):1657. doi: 10.3390/diagnostics15131657.

Assessing the Accuracy of Diagnostic Capabilities of Large Language Models

Andrada Elena Urda-Cîmpean et al. Diagnostics (Basel). 2025.

Abstract

Background: In recent years, numerous artificial intelligence applications, especially generative large language models (LLMs), have emerged in the medical field. This study conducted a structured comparative analysis of four leading generative LLMs to evaluate their diagnostic performance in clinical case scenarios: ChatGPT-4o (OpenAI), Grok-3 (xAI), Gemini-2.0 Flash (Google), and DeepSeek-V3 (DeepSeek). Methods: We assessed medical knowledge recall and clinical reasoning capabilities through staged, progressively complex cases, with responses graded by expert raters on a 0-5 scale. Results: All models performed better on knowledge-based questions than on reasoning tasks, highlighting ongoing limitations in contextual diagnostic synthesis. Overall, DeepSeek outperformed the other models, achieving significantly higher scores across all evaluation dimensions (p < 0.05), particularly on medical reasoning tasks. Conclusions: While these findings support the feasibility of using LLMs for medical training and decision support, the study emphasizes the need for improved interpretability, prompt optimization, and rigorous benchmarking to ensure clinical reliability. This structured, comparative approach contributes to ongoing efforts to establish standardized evaluation frameworks for integrating LLMs into diagnostic workflows.
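
The abstract reports significantly higher scores for DeepSeek (p < 0.05) but does not name the statistical test here. As a minimal illustrative sketch only, the Python snippet below shows one plausible way such a comparison could be run, assuming a nonparametric Kruskal-Wallis test over the 0-5 expert ratings; the rating data are hypothetical and not taken from the study.

    # Minimal sketch (hypothetical data, not the study's): comparing 0-5 expert
    # ratings across four models with a Kruskal-Wallis test, a common choice for
    # ordinal scores that may violate ANOVA's normality assumption.
    from scipy.stats import kruskal

    ratings = {  # hypothetical 0-5 ratings per model
        "ChatGPT-4o":       [4, 3, 5, 4, 3, 4, 2, 4],
        "Grok-3":           [3, 4, 3, 3, 4, 2, 3, 3],
        "Gemini-2.0 Flash": [3, 3, 4, 2, 3, 3, 4, 3],
        "DeepSeek-V3":      [5, 4, 5, 4, 5, 4, 4, 5],
    }

    stat, p = kruskal(*ratings.values())
    print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")  # p < 0.05: models differ

    for model, scores in ratings.items():
        print(f"{model}: mean score {sum(scores) / len(scores):.2f}")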

Keywords: artificial intelligence; diagnostic accuracy; large language models; medical education.

Conflict of interest statement

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figures

Figure 1. Flowchart of the methodological steps used in the study.
Figure 2. Conceptual clinical case stage/context complexity and structure.
Figure 3. Question type distribution for each clinical case.
Figure 4. Comparison of LLM mean performance scores (±SD) across the five criteria used to assess diagnostic capabilities (CG = ChatGPT, GK = Grok, GE = Gemini, and DS = DeepSeek).
Figure 5. Comparison of LLM mean performance scores (±SD) on medical knowledge questions, according to the five criteria (CG = ChatGPT, GK = Grok, GE = Gemini, and DS = DeepSeek).
Figure 6. Comparison of LLM mean performance scores (±SD) on medical reasoning questions, according to the five criteria (CG = ChatGPT, GK = Grok, GE = Gemini, and DS = DeepSeek).
