Professionalism and clinical short answer question marking with machine learning
- PMID: 35879236
- DOI: 10.1111/imj.15839
Abstract
Machine learning may assist in medical student evaluation. This study scored short answer questions administered at three centres. Bidirectional encoder representations from transformers (BERT) were particularly effective for professionalism question scoring (accuracy ranging from 41.6% to 92.5%). In the scoring of 3-mark professionalism questions, machine learning had a lower classification accuracy than for clinical questions (P < 0.05). The role of machine learning in medical professionalism evaluation warrants further investigation.
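The study frames short answer marking as a classification task: a model maps free-text answers to a mark band (e.g. 0-3). As a minimal sketch of that framing, and not the authors' BERT pipeline, the toy scorer below awards marks by matching hypothetical key points from a marking guide; the key points, question, and thresholds are invented for illustration.

```python
# Toy illustration of short-answer marking as classification.
# NOT the study's BERT model: a hypothetical keyword-overlap scorer
# that maps a free-text answer to a 0-3 mark band.

def score_answer(answer: str, key_points: list[str], max_mark: int = 3) -> int:
    """Award one mark per marking-guide key point mentioned, capped at max_mark."""
    text = answer.lower()
    hits = sum(1 for point in key_points if point.lower() in text)
    return min(hits, max_mark)

# Hypothetical marking guide for a professionalism question.
key_points = ["confidentiality", "informed consent", "escalate"]

print(score_answer(
    "I would obtain informed consent and maintain confidentiality.",
    key_points,
))  # matches 2 of 3 key points -> mark of 2
```

A transformer-based marker replaces the keyword matching with a fine-tuned text classifier over the same answer/mark-band pairs, which is where the accuracy figures reported above come from.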
Keywords: artificial intelligence; medical education; natural language processing; performance evaluation.
© 2022 Royal Australasian College of Physicians.