Comparative Study

Dedicated AI Expert System vs Generative AI With Large Language Model for Clinical Diagnoses

Mitchell J Feldman et al. JAMA Netw Open. 2025 May 1;8(5):e2512994. doi: 10.1001/jamanetworkopen.2025.12994.

Abstract

Importance: Large language models (LLMs) have not yet been compared with traditional diagnostic decision support systems (DDSSs) on unpublished clinical cases.

Objective: To compare the performance of 2 widely used LLMs (ChatGPT, version 4 [hereafter, LLM1] and Gemini, version 1.5 [hereafter, LLM2]) with a DDSS (DXplain [hereafter, DDSS]) on 36 unpublished general medicine cases.

Design, setting, and participants: This diagnostic study, conducted from October 6, 2023, to November 22, 2024, entered data from previously unpublished clinical cases from 3 academic medical centers and assessed whether the known case diagnosis appeared in the differential diagnoses generated by the LLMs and the DDSS. The systems' performance was assessed both with and without laboratory test data. Each case was reviewed by 3 physicians blinded to the case diagnosis. Physicians identified all clinical findings as well as the subset deemed relevant to making the diagnosis for mapping to the DDSS's controlled vocabulary. Two other physicians, also blinded to the diagnoses, entered the data from these cases into the DDSS, LLM1, and LLM2.

Exposures: All cases were entered into each LLM twice, with and without laboratory test results. For the DDSS, each case was entered 4 times: for all findings and for findings relevant to the diagnosis, each with and without laboratory test results. The top 25 diagnoses in each resulting differential diagnosis were reviewed.

Main outcomes and measures: Presence or absence of the case diagnosis in the system's differential diagnosis and, when present, in which quintile it appeared in the top 25 diagnoses.
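
For illustration only (not part of the published study), the outcome described above can be expressed as a small Python sketch; the function name and the rank-to-quintile mapping (five groups of 5 within the top 25) are assumptions inferred from the abstract's description.

    # Hedged sketch (assumed helper, not from the study): given a system's
    # ranked differential, report whether the case diagnosis is present in
    # the top 25 and, if so, which quintile it falls in (ranks 1-5 ->
    # quintile 1, 6-10 -> quintile 2, and so on).
    def diagnosis_outcome(differential, case_diagnosis, top_n=25):
        top = [d.strip().lower() for d in differential[:top_n]]
        target = case_diagnosis.strip().lower()
        if target not in top:
            return False, None
        rank = top.index(target) + 1              # 1-based rank in the list
        quintile = (rank - 1) // (top_n // 5) + 1  # 5 diagnoses per quintile
        return True, quintile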

Results: Among 36 patient cases of various races and ethnicities, genders, and ages (mean [SD] age, 51.4 [16.4] years), in the version with all findings but no laboratory test results, the DDSS listed the case diagnosis in its differential diagnosis more often (56% [20 of 36]) than LLM1 (42% [15 of 36]) and LLM2 (39% [14 of 36]), although this difference did not reach statistical significance (DDSS vs LLM1, P = .09; DDSS vs LLM2, P = .08). All 3 systems listed the case diagnosis in most cases if laboratory test results were included (all findings DDSS, 72% [26 of 36]; LLM1, 64% [23 of 36]; and LLM2, 58% [21 of 36]).

Conclusions and relevance: In this diagnostic study comparing the performance of a traditional DDSS and current LLMs on unpublished clinical cases, every system listed the case diagnosis in its top 25 diagnoses in most cases if laboratory test results were included. A hybrid approach that combines the parsing and expository linguistic capabilities of LLMs with the deterministic and explanatory capabilities of traditional DDSSs may produce synergistic benefits.


Conflict of interest statement

Conflict of Interest Disclosures: None reported.

Figures

Figure. Diagnostic Decision Support System (DDSS) vs Large Language Models (LLMs)
Comparison of performance across systems for placing the correct diagnosis higher up on a differential diagnosis consisting of 25 diagnoses. DDSS ALL indicates the DDSS with all clinical data; DDSS REL, the DDSS with only clinical data considered relevant.
