A narrative review of adaptive testing and its application to medical education
- PMID: 38028657
- PMCID: PMC10680016
- DOI: 10.12688/mep.19844.1
Abstract
Adaptive testing has a long but largely unrecognised history. The advent of computer-based testing has created new opportunities to incorporate adaptive testing into conventional programmes of study. Relatively recently, software has been developed that can automate the delivery of summative assessments that adapt by difficulty or by content. Both types of adaptive testing require a large item bank that has been suitably quality assured. Adaptive testing by difficulty enables more reliable evaluation of individual candidate performance, although at the expense of transparency in decision making and the requirement for unidirectional navigation. Adaptive testing by content enables a reduction in compensation and targeted individual support, providing assurance of performance in all the required outcomes, although at the expense of discovery learning. With both types of adaptive testing, candidates are presented with different sets of items, and there is the potential for this to be perceived as unfair. However, when candidates of different abilities receive the same items, they may receive too many items they can answer with ease, or too many that are too difficult to answer. Both situations may be considered unfair, as neither gives candidates the opportunity to demonstrate what they know; adapting by difficulty addresses this. Similarly, when everyone is presented with the same items but answers different items incorrectly, failing to provide individualised support and the opportunity to demonstrate performance in all the required outcomes by revisiting content previously answered incorrectly could also be considered unfair; a point addressed when adapting by content. We review the educational rationale behind the evolution of adaptive testing and consider its inherent strengths and limitations. We explore the continuous pursuit of improved examination methodology and how software can facilitate personalised assessment. We highlight how this can serve as a catalyst for learning and the refinement of curricula, fostering engagement of learner and educator alike.
Keywords: Assessment; adaptive testing; compensation; different questions; fairness; personalised; progress testing; reliability.
Copyright: © 2023 Burr SA et al.
Conflict of interest statement
No competing interests were disclosed.