The Role of Large Language Models in Medical Education: Applications and Implications
- PMID: 37578830
- PMCID: PMC10463084
- DOI: 10.2196/50945
Abstract
Large language models (LLMs) such as ChatGPT have sparked extensive discourse within the medical education community, spurring both excitement and apprehension. Written from the perspective of medical students, this editorial offers insights gleaned through immersive interactions with ChatGPT, contextualized by ongoing research into the imminent role of LLMs in health care. Three distinct positive use cases for ChatGPT were identified: facilitating differential diagnosis brainstorming, providing interactive practice cases, and aiding in multiple-choice question review. These use cases can effectively help students learn foundational medical knowledge during the preclinical curriculum while reinforcing the learning of core Entrustable Professional Activities. Simultaneously, we highlight key limitations of LLMs in medical education, including their insufficient ability to teach the integration of contextual and external information, comprehend sensory and nonverbal cues, cultivate rapport and interpersonal interaction, and align with overarching medical education and patient care goals. Through interacting with LLMs to augment learning during medical school, students can gain an understanding of their strengths and weaknesses. This understanding will be pivotal as we navigate a health care landscape increasingly intertwined with LLMs and artificial intelligence.
Keywords: AI; ChatGPT; LLM; artificial intelligence in health care; autoethnography; large language models; medical education.
©Conrad W Safranek, Anne Elizabeth Sidamon-Eristoff, Aidan Gilson, David Chartash. Originally published in JMIR Medical Education (https://mededu.jmir.org), 14.08.2023.
Conflict of interest statement
Conflicts of Interest: None declared.