Performance of ChatGPT on a Radiology Board-style Examination: Insights into Current Strengths and Limitations
- PMID: 37191485
- DOI: 10.1148/radiol.230582
Abstract
Background: ChatGPT is a powerful artificial intelligence large language model with great potential as a tool in medical practice and education, but its performance in radiology remains unclear.

Purpose: To assess the performance of ChatGPT on radiology board-style examination questions without images and to explore its strengths and limitations.

Materials and Methods: In this exploratory prospective study performed from February 25 to March 3, 2023, 150 multiple-choice questions designed to match the style, content, and difficulty of the Canadian Royal College and American Board of Radiology examinations were grouped by question type (lower-order [recall, understanding] and higher-order [apply, analyze, synthesize] thinking) and topic (physics, clinical). The higher-order thinking questions were further subclassified by type (description of imaging findings, clinical management, application of concepts, calculation and classification, disease associations). ChatGPT performance was evaluated overall, by question type, and by topic. Confidence of language in responses was assessed. Univariable analysis was performed.

Results: ChatGPT answered 69% of questions correctly (104 of 150). The model performed better on questions requiring lower-order thinking (84%, 51 of 61) than on those requiring higher-order thinking (60%, 53 of 89) (P = .002). When compared with lower-order questions, the model performed worse on questions involving description of imaging findings (61%, 28 of 46; P = .04), calculation and classification (25%, two of eight; P = .01), and application of concepts (30%, three of 10; P = .01). ChatGPT performed as well on higher-order clinical management questions (89%, 16 of 18) as on lower-order questions (P = .88). It performed worse on physics questions (40%, six of 15) than on clinical questions (73%, 98 of 135) (P = .02). ChatGPT used confident language consistently, even when incorrect (100%, 46 of 46).

Conclusion: Despite no radiology-specific pretraining, ChatGPT nearly passed a radiology board-style examination without images; it performed well on lower-order thinking questions and clinical management questions but struggled with higher-order thinking questions involving description of imaging findings, calculation and classification, and application of concepts.

© RSNA, 2023. See also the editorial by Lourenco et al and the article by Bhayana et al in this issue.
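The group comparisons in the Results (e.g., lower-order 84%, 51 of 61, vs higher-order 60%, 53 of 89; P = .002) come from a univariable analysis whose specific test is not named in the abstract. The sketch below is an illustration only, assuming a standard 2x2 comparison of proportions in Python with scipy (the authors' software and test choice are not stated here); it rebuilds the contingency table from the reported counts and runs both a chi-square test and Fisher's exact test, which should yield P values near the reported .002.

```python
"""Sanity-check sketch for one comparison reported in the abstract.

Assumption: the abstract states only that "univariable analysis was performed";
the specific test is not named, so a chi-square test and Fisher's exact test
are used here purely for illustration.
"""
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table of correct vs. incorrect answers:
# lower-order questions: 51 of 61 correct; higher-order questions: 53 of 89 correct.
table = [
    [51, 61 - 51],  # lower-order: correct, incorrect
    [53, 89 - 53],  # higher-order: correct, incorrect
]

chi2, p_chi2, dof, _ = chi2_contingency(table)  # chi-square with Yates correction
odds_ratio, p_fisher = fisher_exact(table)      # two-sided Fisher exact test

# Both P values should land near the reported P = .002; the exact figure
# depends on which test and continuity correction the authors actually used.
print(f"chi-square:   chi2={chi2:.2f}, P={p_chi2:.4f}")
print(f"Fisher exact: OR={odds_ratio:.2f}, P={p_fisher:.4f}")
```

The same pattern can be repeated for the other reported comparisons (e.g., physics 6 of 15 vs clinical 98 of 135) by swapping in the corresponding counts.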
Comment in
- Rise of ChatGPT: It May Be Time to Reassess How We Teach and Test Radiology Residents. Radiology. 2023 Jun;307(5):e231053. doi: 10.1148/radiol.231053. PMID: 37191490.
- Response to Performance of ChatGPT on a Radiology Board-style Examination. Radiology. 2023 Jun;307(5):e231330. doi: 10.1148/radiol.231330. PMID: 37338357.
- ChatGPT in Radiology: Evaluating Proficiencies, Addressing Shortcomings, and Proposing Integrative Approaches for the Future. Radiology. 2023 Jul;308(1):e231335. doi: 10.1148/radiol.231335. PMID: 37432082.