Improving Large Language Models' Summarization Accuracy by Adding Highlights to Discharge Notes: Comparative Evaluation
- PMID: 40705416
- PMCID: PMC12332456
- DOI: 10.2196/66476
Abstract
Background: The American Medical Association recommends that electronic health record (EHR) notes, which are often dense and written in nuanced language, be made readable for patients and laypeople, a practice we refer to as the simplification of discharge notes. Our approach simplifies discharge notes through a series of incremental steps toward an ideal note; in this paper, we present the first step of that process. Large language models (LLMs) have demonstrated considerable success in text summarization, and LLM summaries can present the content of EHR notes in easier-to-read language. However, LLM summaries can also introduce inaccuracies.
Objective: This study aims to test the hypothesis that summaries generated by LLMs from discharge notes in which detailed information is highlighted are more accurate than summaries generated from the original, unhighlighted notes.
Methods: To test our hypothesis, we randomly sampled 15 discharge notes from the MIMIC-III database and highlighted their detailed information using an interface terminology we previously developed with machine learning. This interface terminology was curated to encompass detailed information from the discharge notes. The highlighted discharge notes marked detailed information, specifically the concepts present in this interface terminology, with a blue background. To calibrate the LLMs' summaries for our simplification goal, we chose GPT-4o and used prompt engineering to ensure high-quality prompts and to address output inconsistency and prompt sensitivity. We provided both highlighted and unhighlighted versions of each EHR note, along with their corresponding prompts, to GPT-4o. Each generated summary was manually evaluated using three metrics: completeness, correctness, and structural integrity.
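The abstract describes highlighting terminology concepts in the notes and prompting GPT-4o with both versions; the sketch below illustrates how such a pipeline could look in Python. It is not the authors' code: the concept list, the [[...]] inline markers standing in for the blue-background highlighting, and the prompt wording are illustrative assumptions, and only the publicly documented OpenAI chat completions API is used.

```python
# Minimal sketch (not the authors' implementation): mark interface-terminology
# concepts in a discharge note, then ask GPT-4o for a structured summary.
# Assumes the `openai` Python client is installed and OPENAI_API_KEY is set.

import re
from openai import OpenAI

# Hypothetical stand-in for the authors' interface terminology.
TERMINOLOGY_CONCEPTS = ["atrial fibrillation", "metoprolol", "troponin"]

def highlight(note: str, concepts: list[str]) -> str:
    """Wrap terminology concepts in [[...]] markers, approximating the paper's
    blue-background highlighting in plain text."""
    for concept in sorted(concepts, key=len, reverse=True):
        note = re.sub(rf"(?i)\b{re.escape(concept)}\b",
                      lambda m: f"[[{m.group(0)}]]", note)
    return note

# Illustrative prompt; the paper's engineered prompts are not reproduced here.
PROMPT = (
    "Summarize the following discharge note for a layperson. "
    "Text wrapped in [[double brackets]] is detailed clinical information "
    "that must be preserved accurately. Keep the summary organized under "
    "the note's original section headers."
)

def summarize(note: str, use_highlights: bool) -> str:
    """Generate a summary from either the highlighted or the original note."""
    client = OpenAI()
    text = highlight(note, TERMINOLOGY_CONCEPTS) if use_highlights else note
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": text},
        ],
        temperature=0,  # reduce output inconsistency across runs
    )
    return response.choices[0].message.content
```

In this sketch, running `summarize(note, use_highlights=True)` and `summarize(note, use_highlights=False)` on the same note would produce the H-summary and U-summary analogs that the study compares.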
Results: The study sample comprised 15 discharge notes. On average, summaries from highlighted notes (H-summaries) achieved 96% completeness, 8% higher than summaries from unhighlighted notes (U-summaries). H-summaries had higher completeness for 13 notes, and U-summaries had higher or equal completeness for 2 notes (P=.01; statistically significant). H-summaries also demonstrated better correctness than U-summaries, with fewer instances of erroneous information (2 vs 3 errors, respectively). H-summaries had fewer improper headers for 11 notes, and U-summaries had fewer for 4 notes (P=.03; statistically significant). In addition, we identified 8 instances of misplaced information in the U-summaries and only 2 in the H-summaries. These findings support the hypothesis that summarizing highlighted discharge notes improves summary accuracy.
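The abstract reports paired, per-note comparisons (eg, 13 vs 2 notes, P=.01) but does not name the statistical test used. Purely as an illustration, the sketch below shows how one exact paired analysis, a sign test on per-note wins, could be computed; this is an assumption, not the authors' analysis, and is not expected to reproduce the reported P values exactly.

```python
# Illustrative only: exact sign test on paired per-note outcomes.
# The abstract does not specify the statistical test; this is an assumed example.

from scipy.stats import binomtest

wins_h = 13  # notes where the H-summary scored higher on completeness
wins_u = 2   # notes where the U-summary scored higher or equal
             # (in a strict sign test, ties would normally be excluded)

result = binomtest(wins_h, n=wins_h + wins_u, p=0.5, alternative="two-sided")
print(f"Exact two-sided sign-test P value: {result.pvalue:.3f}")
```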
Conclusions: Providing LLMs with highlighted discharge notes, combined with prompt engineering, yields summaries of higher quality in terms of correctness, completeness, and structural integrity than summaries of unhighlighted discharge notes.
Keywords: AI; ChatGPT; ChatGPT summaries; EHR; EHR summaries; LLM; LLM summaries; accuracy of summaries; artificial intelligence; clinical notes summarization; discharge notes; discharge notes summarization; electronic health record; highlighted EHR notes; large language model.
©Mahshad Koohi Habibi Dehkordi, Yehoshua Perl, Fadi P Deek, Zhe He, Vipina K Keloth, Hao Liu, Gai Elhanan, Andrew J Einstein. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 24.07.2025.
Conflict of interest statement
Conflicts of Interest: None declared.