Comparative Study

Semantic Clinical Artificial Intelligence vs Native Large Language Model Performance on the USMLE

Peter L Elkin et al. JAMA Netw Open. 2025 Apr 1;8(4):e256359. doi: 10.1001/jamanetworkopen.2025.6359.

Abstract

Importance: Large language models (LLMs) are being implemented in health care. Enhanced accuracy and methods to maintain accuracy over time are needed to maximize LLM benefits.

Objective: To evaluate whether LLM performance on the US Medical Licensing Examination (USMLE) can be improved by including formally represented semantic clinical knowledge.

Design, setting, and participants: This comparative effectiveness research study was conducted between June 2024 and February 2025 at the Department of Biomedical Informatics, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, New York, using sample questions from the USMLE Steps 1, 2, and 3.

Intervention: Semantic clinical artificial intelligence (SCAI) was developed to insert formally represented semantic clinical knowledge into LLMs using retrieval augmented generation (RAG).
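To make the intervention concrete, below is a minimal Python sketch of the core RAG move: formally represented clinical facts (semantic triples) are inserted into the prompt ahead of the exam question. The data class, function name, and prompt wording are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class SemanticTriple:
    # A formally represented clinical fact: subject - predicate - object.
    subject: str
    predicate: str
    obj: str


def build_augmented_prompt(question: str, facts: list[SemanticTriple]) -> str:
    """Insert retrieved clinical knowledge ahead of the exam question,
    the basic step in retrieval augmented generation (RAG)."""
    knowledge = "\n".join(f"- {f.subject} {f.predicate} {f.obj}" for f in facts)
    return (
        "Use the following clinical facts when answering.\n"
        f"{knowledge}\n\n"
        f"Question: {question}\n"
        "Answer with the letter of the best option."
    )
```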

Main outcomes and measures: The SCAI method was evaluated by comparing the performance of 3 Llama LLMs (13B, 70B, and 405B; Meta) with and without SCAI RAG on text-based questions from the USMLE Steps 1, 2, and 3. LLM accuracy for answering questions was determined by comparing the LLM output with the USMLE answer key.
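A hedged sketch of the scoring step described here: each model's selected answer is compared with the USMLE answer key and per-step accuracy is computed. The answer format (single option letters keyed by question ID) is an assumption for illustration.

```python
def score_step(model_answers: dict[str, str], answer_key: dict[str, str]) -> tuple[int, float]:
    """Count correctly answered questions and return (n_correct, accuracy).
    Assumes answers are single option letters keyed by question ID."""
    correct = sum(
        1 for qid, key in answer_key.items()
        if model_answers.get(qid, "").strip().upper() == key.upper()
    )
    return correct, correct / len(answer_key)

# Example of the arithmetic reported below: 80 of 87 Step 1 questions
# correct corresponds to 92.0% accuracy.
```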

Results: The LLMs were tested on 87 questions in the USMLE Step 1, 103 in Step 2, and 123 in Step 3. The 13B LLM enhanced by SCAI RAG was associated with significantly improved performance on Steps 1 and 3 but only met the 60% passing threshold on Step 3 (74 questions correct [60.2%]). The 70B and 405B LLMs passed all the USMLE steps with and without SCAI RAG. The SCAI RAG 70B model scored 80 questions (92.0%) correctly on Step 1, 82 (79.6%) on Step 2, and 112 (91.1%) on Step 3. The SCAI RAG 405B model scored 79 (90.8%) correctly on Step 1, 87 (84.5%) on Step 2, and 117 (95.1%) on Step 3. Significant improvements associated with SCAI RAG were found for the 13B model on Steps 1 and 3, the 70B model on Step 2, and the 405B parameter model on Step 3. The 70B model was significantly better than the 13B model, and the 405B model was not significantly better than the 70B model.

Conclusions and relevance: In this comparative effectiveness research study, SCAI RAG was associated with significantly improved scores on the USMLE Steps 1, 2, and 3. The 13B model passed Step 3 with RAG, and the 70B and 405B models passed and scored well on Steps 1, 2, and 3 with or without augmentation. New forms of reasoning by LLMs, like semantic reasoning, have potential to improve the accuracy of LLM performance on important medical questions. Improving LLM performance in health care with targeted, up-to-date clinical knowledge is an important step in LLM implementation and acceptance.


Conflict of interest statement

Conflict of Interest Disclosures: Prof Elkin reported receiving grants from the National Institutes of Health (NIH) during the conduct of the study. Mr Mehta reported receiving grants from the NIH during the conduct of the study. Mr LeHouillier reported receiving grants from the National Library of Medicine (NLM) during the conduct of the study. Dr Mullin reported receiving a T15 postdoctoral training grant fellowship from the NLM during the conduct of the study. No other disclosures were reported.

Figures

Figure 1. Semantic Triples and Knowledge Graphs
The knowledge graphs may contain 1 or more semantic triples. Two semantic triples are combined at the lower right of the figure. Numbers signify Systematized Nomenclature of Medicine–Clinical Terms codes. OWL indicates W3C Web Ontology Language.
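For illustration, a small sketch of how two triples that share a node can be combined into one knowledge-graph fragment, mirroring the structure in Figure 1. The concept labels and numeric codes are hypothetical placeholders, not actual SNOMED CT content.

```python
# Hypothetical triples sharing a subject node; labels and codes are placeholders.
triples = [
    # (subject_code, subject_label, predicate, object_code, object_label)
    ("111111111", "Disorder A", "has finding site", "222222222", "Structure B"),
    ("111111111", "Disorder A", "causative agent", "333333333", "Organism C"),
]

# Combine triples that share a node into a single graph fragment
# (adjacency-list form), as in the lower right of Figure 1.
graph: dict[str, list[tuple[str, str]]] = {}
for subject_code, _, predicate, object_code, _ in triples:
    graph.setdefault(subject_code, []).append((predicate, object_code))

# graph == {"111111111": [("has finding site", "222222222"),
#                         ("causative agent", "333333333")]}
```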
Figure 2. Clinical Knowledge Training and Retrieval Augmented Generation (RAG) Prompt Generation for Semantic Clinical Artificial Intelligence (SCAI) Implementation
Numbered circles represent data flows between algorithms and/or data stores, which are represented by boxes with lowercase letters. GRAPH DB indicates graph database; HD-NLP, high-definition natural language processing; LLM, large language model; SPL, structured product labeling; USMLE, United States Medical Licensing Examination.
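A schematic, self-contained sketch of the two phases the figure depicts: knowledge training (facts loaded into a graph store) and RAG prompt generation (relevant facts retrieved for a question). The class, the toy word-overlap retrieval, and the example fact are illustrative assumptions standing in for the HD-NLP pipeline and graph database.

```python
import re


class TripleStore:
    """Minimal in-memory stand-in for the graph database in Figure 2."""

    def __init__(self) -> None:
        self.triples: list[tuple[str, str, str]] = []

    def add(self, subject: str, predicate: str, obj: str) -> None:
        self.triples.append((subject, predicate, obj))

    def query(self, text: str) -> list[tuple[str, str, str]]:
        # Toy retrieval by word overlap; a real system would query the graph.
        words = set(re.findall(r"\w+", text.lower()))
        return [t for t in self.triples
                if words & set(re.findall(r"\w+", " ".join(t).lower()))]


# Phase 1, clinical knowledge training: triples extracted from source text
# (e.g., structured product labeling) are loaded into the store.
store = TripleStore()
store.add("drug X", "is contraindicated with", "condition Y")  # hypothetical fact

# Phase 2, RAG prompt generation: retrieve matching facts for a USMLE-style
# question and prepend them to the LLM prompt.
question = "A patient with condition Y is prescribed drug X. What is the concern?"
facts = "\n".join("- " + " ".join(t) for t in store.query(question))
prompt = f"Clinical facts:\n{facts}\n\nQuestion: {question}"
```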

