Almanac - Retrieval-Augmented Language Models for Clinical Medicine
- PMID: 38343631
- PMCID: PMC10857783
- DOI: 10.1056/aioa2300068
Abstract
Background: Large language models (LLMs) have recently shown impressive zero-shot capabilities, whereby they can use auxiliary data, without the availability of task-specific training examples, to complete a variety of natural language tasks, such as summarization, dialogue generation, and question answering. However, despite many promising applications of LLMs in clinical medicine, adoption of these models has been limited by their tendency to generate incorrect and sometimes even harmful statements.
Methods: We tasked a panel of eight board-certified clinicians and two health care practitioners with evaluating Almanac, an LLM framework augmented with retrieval capabilities from curated medical resources for medical guideline and treatment recommendations. The panel compared responses from Almanac and standard LLMs (ChatGPT-4, Bing, and Bard) on a novel data set of 314 clinical questions spanning nine medical specialties.
Results: Almanac showed a significant improvement in performance compared with the standard LLMs across axes of factuality, completeness, user preference, and adversarial safety.
Conclusions: Our results show the potential for LLMs with access to domain-specific corpora to be effective in clinical decision-making. The findings also underscore the importance of carefully testing LLMs before deployment to mitigate their shortcomings. (Funded by the National Institutes of Health, National Heart, Lung, and Blood Institute.)
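The retrieval-augmented approach the abstract describes — grounding an LLM's answer in passages retrieved from a curated corpus rather than relying on the model's parametric memory — can be illustrated with a minimal, generic sketch. This is not the authors' implementation: the corpus snippets, the bag-of-words cosine retrieval, and all function names here are illustrative assumptions, standing in for the curated medical resources and retrieval machinery the paper evaluates.

```python
# Generic retrieval-augmented QA sketch (illustrative only, not Almanac's
# actual pipeline): retrieve the most relevant snippet from a small
# curated corpus, then build a prompt that grounds the answer in it.
from collections import Counter
import math

# Hypothetical stand-in for a curated medical knowledge base.
CORPUS = [
    "Aspirin is recommended for secondary prevention of myocardial infarction.",
    "Metformin is first-line therapy for type 2 diabetes mellitus.",
    "Warfarin dosing is guided by the INR target range.",
]

def tokenize(text):
    """Lowercase and strip trailing punctuation from each word."""
    return [w.strip(".,?").lower() for w in text.split()]

def score(query, doc):
    """Cosine similarity over bag-of-words term counts."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return overlap / norm if norm else 0.0

def retrieve(query, corpus=CORPUS):
    """Return the best-matching snippet to ground the LLM prompt."""
    return max(corpus, key=lambda doc: score(query, doc))

def build_prompt(query):
    """Prepend the retrieved source so the model answers from evidence."""
    context = retrieve(query)
    return f"Answer using only this source:\n{context}\n\nQuestion: {query}"
```

In a production system, the toy cosine scorer would typically be replaced by dense embedding search over a large guideline corpus, but the grounding step — placing retrieved evidence in the prompt before generation — is the mechanism the paper credits for the improvements in factuality and safety.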
Update of
- Almanac: Retrieval-Augmented Language Models for Clinical Medicine. Res Sq [Preprint]. May 2, 2023: rs.3.rs-2883198. doi: 10.21203/rs.3.rs-2883198/v1. PMID: 37205549. Updated in: NEJM AI. 2024 Feb;1(2). doi: 10.1056/aioa2300068.