Medical large language models are vulnerable to data-poisoning attacks
- PMID: 39779928
- PMCID: PMC11835729
- DOI: 10.1038/s41591-024-03445-1
Abstract
The adoption of large language models (LLMs) in healthcare demands a careful analysis of their potential to spread false medical knowledge. Because LLMs ingest massive volumes of data from the open Internet during training, they are potentially exposed to unverified medical knowledge that may include deliberately planted misinformation. Here, we perform a threat assessment that simulates a data-poisoning attack against The Pile, a popular dataset used for LLM development. We find that replacing just 0.001% of training tokens with medical misinformation results in harmful models that are more likely to propagate medical errors. Furthermore, we discover that corrupted models match the performance of their corruption-free counterparts on open-source benchmarks routinely used to evaluate medical LLMs. Using biomedical knowledge graphs to screen medical LLM outputs, we propose a harm mitigation strategy that captures 91.9% of harmful content (F1 = 85.7%). Our algorithm provides a unique method to validate stochastically generated LLM outputs against hard-coded relationships in knowledge graphs. In view of current calls for improved data provenance and transparent LLM development, we hope to raise awareness of emergent risks from LLMs trained indiscriminately on web-scraped data, particularly in healthcare, where misinformation can potentially compromise patient safety.
© 2025. The Author(s).
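The screening approach described in the abstract lends itself to a simple illustration: extract structured claims from model output and flag any claim that does not match a vetted relationship. The Python sketch below is a minimal, hypothetical rendering of that idea; the toy graph, the extraction stub, and all function names are assumptions made for illustration, not the authors' implementation, which relies on full biomedical knowledge graphs and medical named-entity recognition.

```python
# Minimal sketch of knowledge-graph screening of medical LLM output.
# All names here (Triple, TOY_GRAPH, extract_triples, screen_output)
# are hypothetical illustrations, not the paper's actual pipeline.

from typing import NamedTuple


class Triple(NamedTuple):
    """A (subject, relation, object) claim, e.g. (drug, 'treats', disease)."""
    subject: str
    relation: str
    obj: str


# Toy stand-in for a biomedical knowledge graph: a set of vetted,
# hard-coded relationships. A real system would draw these from a
# curated biomedical resource rather than a handwritten set.
TOY_GRAPH = {
    Triple("metformin", "treats", "type 2 diabetes"),
    Triple("lisinopril", "treats", "hypertension"),
}


def extract_triples(llm_output: str) -> list[Triple]:
    """Placeholder for medical entity and relation extraction.

    In practice this step would use a biomedical NER / relation-extraction
    model; here we return a hand-written example for illustration only.
    """
    return [Triple("metformin", "treats", "hypertension")]


def screen_output(llm_output: str) -> list[Triple]:
    """Return extracted claims NOT supported by the knowledge graph.

    Any triple absent from the vetted relationships is flagged as
    potentially harmful content for human review.
    """
    return [t for t in extract_triples(llm_output) if t not in TOY_GRAPH]


if __name__ == "__main__":
    flagged = screen_output("Metformin is a first-line drug for hypertension.")
    for t in flagged:
        print(f"Unverified claim: {t.subject} {t.relation} {t.obj}")
```

Because the graph encodes fixed, vetted relationships, the check itself is deterministic even though the LLM output being screened is stochastic; claims that cannot be verified are surfaced for review rather than silently passed through.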
Conflict of interest statement
Competing interests: D.A.A. and E.K.O. report consulting with Sofinnova Partners. E.K.O. reports consulting with Google, income from Merck & Co. and Mirati Therapeutics, and equity in Artisight. The other authors declare no competing interests.
