Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
- PMID: 37261894
- PMCID: PMC10273039
- DOI: 10.2196/48291
Abstract
The integration of large language models (LLMs), such as those in the Generative Pre-trained Transformers (GPT) series, into medical education has the potential to transform learning experiences for students and elevate their knowledge, skills, and competence. Drawing on a wealth of professional and academic experience, we propose that LLMs hold promise for revolutionizing medical curriculum development, teaching methodologies, personalized study plans and learning materials, student assessments, and more. However, we also critically examine the challenges that such integration might pose by addressing issues of algorithmic bias, overreliance, plagiarism, misinformation, inequity, privacy, and copyright concerns in medical education. As we navigate the shift from an information-driven educational paradigm to an artificial intelligence (AI)-driven educational paradigm, we argue that it is paramount to understand both the potential and the pitfalls of LLMs in medical education. This paper thus offers our perspective on the opportunities and challenges of using LLMs in this context. We believe that the insights gleaned from this analysis will serve as a foundation for future recommendations and best practices in the field, fostering the responsible and effective use of AI technologies in medical education.
Keywords: ChatGPT; GPT-4; artificial intelligence; educators; generative AI; large language models; medical education; students.
©Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, Javaid Sheikh. Originally published in JMIR Medical Education (https://mededu.jmir.org), 01.06.2023.
Conflict of interest statement
Conflicts of Interest: A Abd-alrazaq is an Associate Editor of
