A novel evaluation benchmark for medical LLMs illuminating safety and effectiveness in clinical domains
- PMID: 41454006
- DOI: 10.1038/s41746-025-02277-8
Abstract
Large language models (LLMs) hold promise for clinical decision support but face major challenges in safety evaluation and effectiveness validation. We developed the Clinical Safety-Effectiveness Dual-Track Benchmark (CSEDB), a multidimensional framework built on clinical expert consensus, encompassing 30 consequence-weighted criteria that cover critical areas such as critical illness recognition, guideline adherence, and medication safety. Thirty-two specialist physicians developed and revised 2069 open-ended Q&A items aligned with these criteria, spanning 26 clinical departments to simulate real-world scenarios. Benchmark testing of six LLMs revealed moderate overall performance (average total score 57.2%; safety 54.7%; effectiveness 62.3%), with a significant 13.3% performance drop in high-risk scenarios (p < 0.0001). Domain-specific medical LLMs consistently outperformed general-purpose models, achieving the highest safety (0.912) and effectiveness (0.861) scores. These findings provide a standardized metric for evaluating medical LLMs in clinical applications, enabling comparative analysis, identification of risk exposure, and targeted improvement across scenarios, and may promote safer and more effective deployment of LLMs in healthcare environments.
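The abstract does not spell out how the consequence-weighted dual-track scores are computed. The snippet below is a minimal illustrative sketch only, not the CSEDB formula: it assumes each Q&A item is graded in [0, 1] on several metrics, that each metric carries a consequence weight and belongs to either the safety or effectiveness track, and that the two tracks are blended by a simple weighted mean. All names, weights, and the 50/50 track split are hypothetical.

```python
# Hypothetical sketch of consequence-weighted dual-track scoring; this is
# NOT the paper's method, which the abstract does not specify.
from dataclasses import dataclass

@dataclass
class MetricScore:
    track: str     # "safety" or "effectiveness" (assumed track labels)
    weight: float  # consequence weight: higher = more severe if the model fails it
    score: float   # grader's score for this metric on one item, in [0, 1]

def track_score(scores: list[MetricScore], track: str) -> float:
    """Consequence-weighted mean of one track's metric scores."""
    relevant = [s for s in scores if s.track == track]
    total_weight = sum(s.weight for s in relevant)
    return sum(s.weight * s.score for s in relevant) / total_weight

def total_score(scores: list[MetricScore], safety_share: float = 0.5) -> float:
    """Blend the two tracks; the 50/50 split is an assumption, not the paper's."""
    return (safety_share * track_score(scores, "safety")
            + (1 - safety_share) * track_score(scores, "effectiveness"))

# Example grades for one open-ended Q&A item (illustrative values only).
item = [
    MetricScore("safety", weight=3.0, score=0.6),         # e.g. critical illness recognition
    MetricScore("safety", weight=2.0, score=0.5),         # e.g. medication safety
    MetricScore("effectiveness", weight=1.0, score=0.7),  # e.g. guideline adherence
]
print(f"safety={track_score(item, 'safety'):.3f} "
      f"effectiveness={track_score(item, 'effectiveness'):.3f} "
      f"total={total_score(item):.3f}")
```

Under these assumptions, a model's benchmark-level safety and effectiveness scores would be its per-item track scores averaged over all 2069 items.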
© 2025. The Author(s).
Conflict of interest statement
Competing interests: SW, TG, YW, WS, ZL, KM, DY, HG and LM are employees of Medlinker Intelligent and Digital Technology Co., Ltd, Beijing, China, the developers of the MedGPT model evaluated in this study. These authors contributed to the study concept only. The other authors declare no competing interests.