NEJM AI. 2024 Feb;1(2):10.1056/aioa2300068.
doi: 10.1056/aioa2300068. Epub 2024 Jan 25.

Almanac - Retrieval-Augmented Language Models for Clinical Medicine

Cyril Zakka et al. NEJM AI. 2024 Feb.

Abstract

Background: Large language models (LLMs) have recently shown impressive zero-shot capabilities: they can use auxiliary data, without task-specific training examples, to complete a variety of natural language tasks such as summarization, dialogue generation, and question answering. However, despite many promising applications of LLMs in clinical medicine, adoption of these models has been limited by their tendency to generate incorrect and sometimes even harmful statements.

Methods: We tasked a panel of eight board-certified clinicians and two health care practitioners with evaluating Almanac, an LLM framework augmented with retrieval capabilities over curated medical resources, for medical guideline and treatment recommendations. The panel compared responses from Almanac with those of standard LLMs (ChatGPT-4, Bing, and Bard) on a novel data set of 314 clinical questions spanning nine medical specialties.

Results: Almanac showed a significant improvement in performance compared with the standard LLMs across axes of factuality, completeness, user preference, and adversarial safety.

Conclusions: Our results show the potential for LLMs with access to domain-specific corpora to be effective in clinical decision-making. The findings also underscore the importance of carefully testing LLMs before deployment to mitigate their shortcomings. (Funded by the National Institutes of Health, National Heart, Lung, and Blood Institute.)

Figures

Figure 1. Almanac Overview.
When presented with a query, Almanac uses external tools to retrieve relevant information before synthesizing a response with citations referencing the source material. Within this framework, large language model (LLM) outputs remain grounded in source material, which provides a reliable way to fact-check them.
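
Figure 1 describes a retrieve-then-synthesize loop. As a rough illustration only, the sketch below shows one way such a pipeline could be wired together; the toy corpus, lexical scoring, prompt format, and llm stub are hypothetical stand-ins and are not taken from the paper, whose retriever, sources, and prompts are not described in this abstract.

    # Rough sketch of a retrieval-augmented answering flow (assumptions noted above).
    from dataclasses import dataclass

    @dataclass
    class Document:
        source: str   # citation shown alongside the answer
        text: str     # passage from a curated resource

    CORPUS = [
        Document("Guideline A, sec. 3", "First-line therapy for condition X is drug Y."),
        Document("Guideline B, sec. 1", "Drug Y is contraindicated in renal failure."),
    ]

    def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
        """Toy lexical retriever: rank passages by query-term overlap."""
        terms = set(query.lower().split())
        ranked = sorted(corpus, key=lambda d: -len(terms & set(d.text.lower().split())))
        return ranked[:k]

    def build_prompt(query: str, docs: list[Document]) -> str:
        """Ground the model in retrieved passages and request numbered citations."""
        context = "\n".join(f"[{i+1}] ({d.source}) {d.text}" for i, d in enumerate(docs))
        return (f"Answer using only the passages below and cite them as [n].\n"
                f"{context}\n\nQuestion: {query}\nAnswer:")

    def answer(query: str, llm=lambda prompt: "<LLM response citing [1], [2]>") -> str:
        # llm is a placeholder that ignores the prompt and returns a canned string;
        # a real system would call a language model here.
        docs = retrieve(query, CORPUS)
        return llm(build_prompt(query, docs))

    print(answer("What is first-line therapy for condition X?"))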
Figure 2. Heat Maps of the Nemenyi P Values for Factuality, Completeness, and Preference for Model Pairs across ClinicalQA.
Red denotes significant differences at P<0.01; blue denotes nonsignificant differences.
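
Pairwise comparisons like those in Figure 2 are conventionally produced by a Friedman test followed by a Nemenyi post-hoc test over per-question ratings. The paper does not state which software was used; the sketch below is a minimal illustration using synthetic ratings and the scikit-posthocs package, both of which are assumptions for demonstration only.

    # Minimal sketch: Nemenyi P-value matrix for pairwise model comparisons.
    # Ratings are synthetic; the real study used clinician evaluations.
    import numpy as np
    import scikit_posthocs as sp

    rng = np.random.default_rng(0)
    models = ["Almanac", "ChatGPT-4", "Bing", "Bard"]   # groups (columns)
    n_questions = 314                                   # blocks (rows), one per clinical question

    # Hypothetical 1-5 factuality ratings (one of the three axes in Figure 2).
    ratings = rng.integers(1, 6, size=(n_questions, len(models)))

    # Nemenyi post-hoc test on Friedman-ranked data: returns a symmetric
    # matrix of pairwise P values, one entry per model pair.
    p_values = sp.posthoc_nemenyi_friedman(ratings)
    p_values.index = p_values.columns = models

    print(p_values.round(3))   # cells below 0.01 correspond to "red" in Figure 2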
