Testing and Evaluation of Health Care Applications of Large Language Models: A Systematic Review

Suhana Bedi et al. JAMA. 2025 Jan 28;333(4):319-328. doi: 10.1001/jama.2024.21700.

Abstract

Importance: Large language models (LLMs) can assist in various health care activities, but current evaluation approaches may not adequately identify the most useful application areas.

Objective: To summarize existing evaluations of LLMs in health care in terms of 5 components: (1) evaluation data type, (2) health care task, (3) natural language processing (NLP) and natural language understanding (NLU) tasks, (4) dimension of evaluation, and (5) medical specialty.

Data sources: A systematic search of PubMed and Web of Science was performed for studies published between January 1, 2022, and February 19, 2024.

Study selection: Studies evaluating 1 or more LLMs in health care.

Data extraction and synthesis: Three independent reviewers categorized studies via keyword searches based on the data used, the health care tasks, the NLP and NLU tasks, the dimensions of evaluation, and the medical specialty.
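The keyword-based categorization described above can be sketched in code. This is a minimal illustration, assuming a simple substring-matching codebook; the keyword lists and labels here are hypothetical, not the authors' actual extraction criteria.

```python
# Hypothetical keyword codebook for categorizing studies along two of the
# five components. Labels and keywords are illustrative only.
CATEGORIES = {
    "health care task": {
        "medical knowledge": ["licensing exam", "USMLE", "board question"],
        "diagnosis": ["diagnosis", "differential"],
        "billing": ["billing code", "ICD-10", "CPT"],
    },
    "dimension of evaluation": {
        "accuracy": ["accuracy", "correctness"],
        "fairness/bias/toxicity": ["fairness", "bias", "toxicity"],
        "calibration": ["calibration", "uncertainty"],
    },
}

def categorize(abstract: str) -> dict[str, list[str]]:
    """Return every label whose keywords appear in the abstract text.

    A study may match several labels per component, which is why
    per-category totals in such a review can exceed the study count.
    """
    text = abstract.lower()
    return {
        component: [
            label
            for label, keywords in labels.items()
            if any(kw.lower() in text for kw in keywords)
        ]
        for component, labels in CATEGORIES.items()
    }
```

For example, an abstract mentioning "accuracy on USMLE licensing exam questions" would be tagged with the medical-knowledge task and the accuracy dimension.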

Results: Of the 519 studies reviewed (published between January 1, 2022, and February 19, 2024), only 5% used real patient care data for LLM evaluation. The most common health care tasks were assessing medical knowledge, such as answering medical licensing examination questions (44.5%), and making diagnoses (19.5%). Administrative tasks such as assigning billing codes (0.2%) and writing prescriptions (0.2%) were less studied. For NLP and NLU tasks, most studies focused on question answering (84.2%), while tasks such as summarization (8.9%) and conversational dialogue (3.3%) were infrequent. Almost all studies (95.4%) used accuracy as the primary dimension of evaluation; fairness, bias, and toxicity (15.8%), deployment considerations (4.6%), and calibration and uncertainty (1.2%) were infrequently measured. Finally, by medical specialty, most studies were in generic health care applications (25.6%), internal medicine (16.4%), surgery (11.4%), and ophthalmology (6.9%), with nuclear medicine (0.6%), physical medicine (0.4%), and medical genetics (0.2%) being the least represented.
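The reported percentages can be converted back into approximate study counts out of the 519 reviewed studies. A quick back-of-envelope sketch (note that categories are not mutually exclusive, since a study may span multiple tasks or dimensions):

```python
# Approximate study counts implied by the reported percentages,
# out of the 519 studies in the review.
TOTAL = 519

percentages = {
    "real patient care data": 5.0,
    "medical knowledge (exam QA)": 44.5,
    "making diagnoses": 19.5,
    "question answering (NLP/NLU task)": 84.2,
    "accuracy as evaluation dimension": 95.4,
    "fairness, bias, and toxicity": 15.8,
}

for label, pct in percentages.items():
    print(f"{label}: ~{round(TOTAL * pct / 100)} studies")
```

So, for instance, "only 5%" corresponds to roughly 26 of the 519 studies using real patient care data, while about 495 used accuracy as the primary evaluation dimension.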

Conclusions and relevance: Existing evaluations of LLMs mostly focus on the accuracy of question answering for medical examinations, without consideration of real patient care data. Dimensions such as fairness, bias, and toxicity, as well as deployment considerations, received limited attention. Future evaluations should adopt standardized applications and metrics, use clinical data, and broaden their focus to include a wider range of tasks and specialties.


Conflict of interest statement

Conflict of Interest Disclosures: Dr Callahan reported receiving consultant fees from Atropos Health LLC outside the submitted work. Dr Lehmann reported being formerly employed by Google outside the submitted work. Dr N. R. Shah reported being a co-founder of Qualified Health PBC, a start-up company for AI in health care, outside the submitted work. Dr Singh reported receiving grants from the National Institute of Diabetes and Digestive and Kidney Diseases for their institution, consulting fees from Flatiron Health, and grants from Blue Cross Blue Shield of Michigan for their institution outside the submitted work. Dr Milstein reported receiving honoraria for meeting participation from the Peterson Center on Healthcare, funded by a charitable foundation; having stock/options from Emsana Health, Amino Health, FNF Advisors, JRSL LLC, Embold, EZPT/Somatic Health, and Prealize outside the submitted work; and being a member of the Leapfrog Group board and the Intermountain Healthcare board. Dr N. H. Shah reported being a co-founder of Prealize Health (a predictive analytics company) and Atropos Health (an on-demand evidence generation company); receiving funding from the Gordon and Betty Moore Foundation for developing virtual model deployments; and being a member of the board of directors of the Coalition for Health AI, a consensus-building organization providing guidelines for the responsible use of artificial intelligence in health care. No other disclosures were reported.

Figures

Figure 1. Selection of Studies in Systematic Review of the Testing and Evaluation of Large Language Models (LLMs)

Figure 2. Heat Map of Health Care Tasks, Natural Language Processing (NLP) and Natural Language Understanding (NLU) Tasks, and Dimensions of Evaluation Across 519 Studies. The sum of tasks and dimensions of evaluation exceeds 519 because a single study may include multiple tasks and/or dimensions of evaluation.

