ChatGPT: is it good for our glaucoma patients?
- PMID: 38983063
- PMCID: PMC11182305
- DOI: 10.3389/fopht.2023.1260415
Abstract
Purpose: Our study investigates ChatGPT's ability to communicate with glaucoma patients.
Methods: We inputted eight glaucoma-related questions/topics found on the American Academy of Ophthalmology (AAO)'s website into ChatGPT. We used the Flesch-Kincaid test, Gunning Fog Index, SMOG Index, and Dale-Chall readability formula to evaluate the comprehensibility of its responses for patients. ChatGPT's answers were compared with those found on the AAO's website.
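The readability formulas listed above are simple functions of sentence, word, and syllable counts. As a minimal sketch, the Flesch-Kincaid grade level can be computed as follows; the syllable counter here is a crude vowel-group heuristic of our own (an assumption — validated readability tools use pronunciation dictionaries, and the study itself does not describe its software):

```python
import re

def count_syllables(word):
    # Heuristic: count vowel groups; drop a trailing silent "e".
    # A rough assumption, not the method used in the study.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    # FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A score of 12.5, as reported for ChatGPT's responses, corresponds roughly to a U.S. 12th-grade reading level, whereas 9.4 corresponds to a 9th-grade level.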
Results: ChatGPT's responses required reading comprehension at a higher grade level (average = grade 12.5 ± 1.6) than the text on the AAO's website (average = grade 9.4 ± 3.5) (p = 0.0384). Across the eight responses, key ophthalmic terms appeared 34 out of 86 times in ChatGPT's answers vs. 86 out of 86 times in the text on the AAO's website. The term "eye doctor" appeared once in the ChatGPT text, but the formal term "ophthalmologist" did not appear at all; "ophthalmologist" appeared 26 times on the AAO's website. The word counts of the answers produced by ChatGPT and those on the AAO's website were similar (p = 0.571), with phrases of homogeneous length.
Conclusion: ChatGPT trains on the texts, phrases, and algorithms supplied to it by software engineers. As ophthalmologists, we should consider embedding the phrase "see an ophthalmologist" in our websites and journals. Our medical assistants should sit with patients during their appointments to ensure that chatbot-generated text is accurate and that patients fully comprehend its meaning. ChatGPT is effective for providing general information such as definitions or potential treatment options for glaucoma. However, it tends to produce repetitive answers, and their elevated readability scores may make them too difficult for patients to read.
Keywords: ChatGPT; artificial intelligence; glaucoma; ophthalmology; patient education.
Copyright © 2023 Wu, Lee, Zhao, Wong and Sidhu.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.