This is a preprint.
Automating Evaluation of AI Text Generation in Healthcare with a Large Language Model (LLM)-as-a-Judge
- PMID: 40313300
- PMCID: PMC12045442
- DOI: 10.1101/2025.04.22.25326219
Abstract
Electronic Health Records (EHRs) store vast amounts of clinical information, making it difficult for healthcare providers to summarize and synthesize the details relevant to their practice. To reduce this cognitive load, generative AI based on Large Language Models (LLMs) has emerged to automatically summarize patient records into clear, actionable insights. However, LLM summaries must be precise and free from errors, making evaluation of summary quality necessary. While human experts are the gold standard for evaluation, their involvement is time-consuming and costly. We therefore introduce and validate an automated method for evaluating real-world EHR multi-document summaries using an LLM as the evaluator, referred to as LLM-as-a-Judge. Benchmarked against the validated Provider Documentation Summarization Quality Instrument (PDSQI-9) for human evaluation, our LLM-as-a-Judge framework demonstrated strong inter-rater reliability with human evaluators. GPT-o3-mini achieved the highest intraclass correlation coefficient, 0.818 (95% CI 0.772, 0.854), with a median score difference of 0 from human evaluators, and completed evaluations in just 22 seconds. Overall, reasoning models excelled in inter-rater reliability, particularly in evaluations requiring advanced reasoning and domain expertise, outperforming non-reasoning models, models trained on the task, and multi-agent workflows. Cross-task validation on the Problem Summarization task similarly confirmed high reliability. By automating high-quality evaluations, a medical LLM-as-a-Judge offers a scalable, efficient solution for rapidly identifying accurate and safe AI-generated summaries in healthcare settings.
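The inter-rater reliability reported above can be illustrated with a minimal sketch of an intraclass correlation computation. The abstract does not specify which ICC form was used, so this assumes the common two-way random-effects, absolute-agreement, single-rater form ICC(2,1); the `icc2_1` function and the toy rating matrix are illustrative, not the authors' code:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    ratings: (n_subjects, k_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    # Sums of squares for subjects (rows), raters (columns), and residual.
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols

    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Toy example: 4 summaries, each scored by a human rater and an LLM judge.
scores = np.array([[4, 4], [3, 3], [5, 4], [2, 2]], dtype=float)
print(round(icc2_1(scores), 3))  # → 0.903
```

An ICC near 1 indicates the LLM judge reproduces human scores almost exactly; the paper's reported 0.818 for GPT-o3-mini would fall in the range conventionally read as good-to-excellent agreement.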
Conflict of interest statement
Competing Interests Statement The authors have no competing interests to declare.