ChatGPT versus human in generating medical graduate exam multiple choice questions-A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom)
- PMID: 37643186
- PMCID: PMC10464959
- DOI: 10.1371/journal.pone.0290691
Erratum in
- Correction: ChatGPT versus human in generating medical graduate exam multiple choice questions-A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom). PLoS One. 2025 Jun 27;20(6):e0327290. doi: 10.1371/journal.pone.0327290. eCollection 2025. PMID: 40577328. Free PMC article.
Abstract
Introduction: Large language models, in particular ChatGPT, have showcased remarkable language processing capabilities. Given the substantial workload of university medical staff, this study aims to assess the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared with questions written by university professoriate staff based on standard medical textbooks.
Methods: 50 MCQs were generated by ChatGPT with reference to two standard undergraduate medical textbooks (Harrison's and Bailey & Love's). Another 50 MCQs were drafted by two university professoriate staff using the same medical textbooks. All 100 MCQs were individually numbered, randomized, and sent to five independent international assessors for quality assessment using a standardized assessment score covering five domains: appropriateness of the question, clarity and specificity, relevance, discriminative power of alternatives, and suitability for a medical graduate examination.
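As an illustration of the blinded randomization step described above, the following Python sketch pools the two question sets, shuffles them, and assigns anonymized identifiers before distribution to assessors. The variable names and placeholder question texts are purely illustrative and are not the authors' actual pipeline.

```python
import random

# Hypothetical illustration: 50 A.I.-generated and 50 human-drafted MCQs
# are pooled, shuffled, and given anonymized IDs so assessors cannot
# infer the source of any question.
ai_mcqs = [{"source": "ChatGPT", "text": f"AI question {i}"} for i in range(1, 51)]
human_mcqs = [{"source": "Human", "text": f"Human question {i}"} for i in range(1, 51)]

pool = ai_mcqs + human_mcqs
random.seed(42)          # fixed seed only so this sketch is reproducible
random.shuffle(pool)

# Assign blinded identifiers 1-100; the source-to-ID key is kept separately
blinded = {idx + 1: q["text"] for idx, q in enumerate(pool)}
answer_key = {idx + 1: q["source"] for idx, q in enumerate(pool)}
```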
Results: The total time required for ChatGPT to create the 50 questions was 20 minutes 25 seconds, while the two human examiners took a total of 211 minutes 33 seconds to draft their 50 questions. When the mean scores of the questions constructed by A.I. were compared with those drafted by humans, the A.I. was inferior to humans only in the relevance domain (A.I.: 7.56 ± 0.94 vs. human: 7.88 ± 0.52; p = 0.04). There was no significant difference in question quality between A.I.- and human-drafted questions in the total assessment score or in the other domains. Questions generated by A.I. yielded a wider range of scores, whereas those created by humans were consistent and fell within a narrower range.
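A minimal sketch of the kind of per-domain comparison reported above, using made-up placeholder scores rather than the study data. The abstract does not state which statistical test was used, so Welch's t-test is assumed here purely for illustration; it is a common choice when one group's scores are noticeably more spread out, as reported for the A.I.-generated questions.

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for relevance scores awarded to each
# question set -- NOT the study's actual data.
ai_relevance = np.array([7.2, 8.1, 6.5, 7.9, 8.0, 7.4, 7.8, 6.9])
human_relevance = np.array([7.9, 8.0, 7.7, 8.1, 7.8, 7.9, 8.2, 7.6])

# Welch's t-test (assumed test; does not require equal variances)
t_stat, p_value = stats.ttest_ind(ai_relevance, human_relevance, equal_var=False)

print(f"mean A.I.   = {ai_relevance.mean():.2f} ± {ai_relevance.std(ddof=1):.2f}")
print(f"mean human  = {human_relevance.mean():.2f} ± {human_relevance.std(ddof=1):.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```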
Conclusion: ChatGPT has the potential to generate comparable-quality MCQs for medical graduate examinations within a significantly shorter time.
Copyright: © 2023 Cheung et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Conflict of interest statement
The authors declared no competing interests.
Similar articles
- Examining the Role of Artificial Intelligence in Assessment: A Comparative Study of ChatGPT and Educator-Generated Multiple-Choice Questions in a Dental Exam. Eur J Dent Educ. 2025 Aug 10. doi: 10.1111/eje.70034. Online ahead of print. PMID: 40785272
- The educational effects of portfolios on undergraduate student learning: a Best Evidence Medical Education (BEME) systematic review. BEME Guide No. 11. Med Teach. 2009 Apr;31(4):282-98. doi: 10.1080/01421590902889897. PMID: 19404891
- AI in radiography education: Evaluating multiple-choice questions difficulty and discrimination. J Med Imaging Radiat Sci. 2025 Jul;56(4):101896. doi: 10.1016/j.jmir.2025.101896. Epub 2025 Mar 28. PMID: 40157013
- Artificial intelligence in radiology examinations: a psychometric comparison of question generation methods. Diagn Interv Radiol. 2025 Jul 21. doi: 10.4274/dir.2025.253407. Online ahead of print. PMID: 40686400
- Eliciting adverse effects data from participants in clinical trials. Cochrane Database Syst Rev. 2018 Jan 16;1(1):MR000039. doi: 10.1002/14651858.MR000039.pub2. PMID: 29372930. Free PMC article.
Cited by
- Comparison of AI-generated and clinician-designed multiple-choice questions in emergency medicine exam: a psychometric analysis. BMC Med Educ. 2025 Jul 1;25(1):949. doi: 10.1186/s12909-025-07528-6. PMID: 40597998. Free PMC article.
- Applications of Artificial Intelligence in Medical Education: A Systematic Review. Cureus. 2025 Mar 1;17(3):e79878. doi: 10.7759/cureus.79878. PMID: 40034416. Free PMC article. Review.
- Evaluating the Efficacy of Artificial Intelligence-Driven Chatbots in Addressing Queries on Vernal Conjunctivitis. Cureus. 2025 Feb 26;17(2):e79688. doi: 10.7759/cureus.79688. PMID: 40161163. Free PMC article.
- Through a Glass Darkly: Perceptions of Ethnoracial Identity in Artificial Intelligence Generated Medical Vignettes and Images. Med Sci Educ. 2025 Feb 27;35(3):1473-1488. doi: 10.1007/s40670-025-02332-9. PMID: 40625992. Free PMC article.
- Exploring prospects, hurdles, and road ahead for generative artificial intelligence in orthopedic education and training. BMC Med Educ. 2024 Dec 28;24(1):1544. doi: 10.1186/s12909-024-06592-8. PMID: 39732679. Free PMC article. Review.