Clin Ophthalmol. 2025 Jun 3;19:1763-1769. doi: 10.2147/OPTH.S513633. eCollection 2025.

Evaluating the Application of Artificial Intelligence and Ambient Listening to Generate Medical Notes in Vitreoretinal Clinic Encounters


Neeket R Patel et al. Clin Ophthalmol.

Abstract

Purpose: To analyze the application of large language models (LLMs) in listening to and generating medical documentation for vitreoretinal clinic encounters.

Subjects: Two publicly available large language models, Google Gemini 1.0 Pro and ChatGPT 3.5.

Methods: Patient-physician dialogues were scripted to simulate real-world vitreoretinal clinic encounters and recorded for standardization. Two artificial intelligence engines were given the audio files to transcribe the dialogue and produce medical documentation of the encounters. Similarity between the scripted dialogue and each LLM transcription was assessed using an online comparability tool. A panel of practicing retina specialists evaluated each generated medical note.
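The abstract does not name the online comparability tool used; as a rough illustrative sketch only (not the authors' method), word-level similarity between a script and a transcription can be computed with Python's standard-library difflib:

```python
import difflib


def transcript_similarity(script: str, transcript: str) -> float:
    """Percent similarity between a scripted dialogue and an LLM transcription.

    Tokenizes both texts into words and computes difflib's sequence-match
    ratio, scaled to 0-100.
    """
    matcher = difflib.SequenceMatcher(None, script.split(), transcript.split())
    return 100 * matcher.ratio()
```

A ratio-based measure like this rewards long runs of matching words, which loosely mirrors how transcript fidelity is judged; the tool the authors used may weight discrepancies differently.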

Main outcome measures: The number of discrepancies and overall similarity of LLM-generated text compared to the scripted patient-physician dialogues, and scoring of each medical note on the Physician Documentation Quality Instrument-9 (PDQI-9) by five retina specialists.

Results: On average, the documentation produced by AI engines scored 81.5% of total possible points in documentation quality. Similarity between pre-formed dialogue scripts and transcribed encounters was higher for ChatGPT (96.5%) compared to Gemini (90.6%, p<0.01). The mean total PDQI-9 score among all encounters from ChatGPT 3.5 (196.2/225, 87.2%) was significantly greater than Gemini 1.0 Pro (170.4/225, 75.7%, p=0.002).
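The PDQI-9 has nine items, each scored 1-5, so one rater contributes at most 45 points and five raters yield the maximum of 225 used above (e.g., 196.2/225 = 87.2%). A minimal sketch of that aggregation, assuming simple summation across raters (illustrative only, not the authors' scoring code):

```python
def pdqi9_total(ratings):
    """Aggregate PDQI-9 ratings across raters.

    ratings: list of per-rater score lists, each containing nine item
    scores from 1 to 5. Returns (total points, percent of maximum).
    """
    total = sum(sum(rater) for rater in ratings)
    max_score = 45 * len(ratings)  # 9 items x 5 points per rater
    return total, round(100 * total / max_score, 1)
```

For example, five raters all giving perfect scores produce 225 points (100%), matching the denominator reported in the results.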

Conclusion: The authors report the aptitude of two popular LLMs (ChatGPT 3.5 and Google Gemini 1.0 Pro) in generating medical notes from audio recordings of scripted vitreoretinal clinical encounters, evaluated with a validated medical documentation instrument. Artificial intelligence can produce quality vitreoretinal clinic encounter notes after listening to patient-physician dialogues, despite case complexity and missing encounter variables. The performance of these engines was satisfactory but sometimes included fabricated information. These findings demonstrate the potential utility of LLMs in reducing the documentation burden on physicians and streamlining patient care.

Keywords: clinical documentation; large language model; ophthalmology; retina.


Conflict of interest statement

Dr Anton Kolomeyer reports personal fees from Astellas (Iveric), Genentech, Regeneron, Alimera Sciences, Apellis, Biogen, Allergan, Oculis, Vial, and Retina Labs, outside the submitted work. Dr Benjamin Kim reports grants from Research to Prevent Blindness and the Paul and Evanina Mackall Foundation, during the conduct of the study. The authors report no other conflicts of interest in this work.

Figures

Figure 1. Total PDQI-9 score for each case, shown per scenario for both ChatGPT 3.5 and Google Gemini 1.0 Pro.


