Comparison of formula and number-right scoring in undergraduate medical training: a Rasch model analysis
- PMID: 29121888
- PMCID: PMC5679154
- DOI: 10.1186/s12909-017-1051-8
Abstract
Background: Progress testing is an assessment tool used to periodically assess all students at the end-of-curriculum level. Because students cannot know everything, it is important that they recognize their lack of knowledge. For that reason, the formula-scoring method has usually been used. However, where partial knowledge needs to be taken into account, the number-right scoring method is used. Research comparing the two methods has yielded conflicting results. As far as we know, all of these studies analyzed the data with Classical Test Theory or Generalizability Theory. In contrast, we explore the use of the Rasch model to compare both methods.
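To make the contrast between the two scoring rules concrete, the sketch below applies the standard correction-for-guessing rule that underlies formula scoring (wrong answers are penalized by 1/(k − 1) for k-option items, omissions score zero) alongside simple number-right scoring. This is an illustrative sketch, not code or data from the study; the function names and the worked numbers are invented.

```python
def number_right_score(n_correct: int) -> float:
    """Number-right scoring: each correct answer counts 1; wrong and omitted answers count 0."""
    return float(n_correct)


def formula_score(n_correct: int, n_wrong: int, n_options: int) -> float:
    """Formula scoring (correction for guessing): each wrong answer is
    penalized by 1 / (k - 1); omitted ('don't know') answers are neither
    rewarded nor penalized."""
    return n_correct - n_wrong / (n_options - 1)


# Hypothetical example: a 200-item test with five-option questions,
# 120 correct, 50 wrong, 30 omitted.
print(number_right_score(120))    # 120.0
print(formula_score(120, 50, 5))  # 120 - 50/4 = 107.5
```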
Methods: A 2 × 2 crossover design was used in a study in which 298 students from four medical schools participated. A sample of 200 previously used progress-test questions was selected. The data were analyzed with the Rasch model, which provides fit parameters, reliability coefficients, and a response-option analysis.
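The dichotomous Rasch model used in the analysis is standard: the probability of a correct response depends only on the difference between a person's ability θ and an item's difficulty b. The sketch below is not the authors' analysis (which would typically be run in dedicated Rasch software); it is a minimal illustration of the model's response function, with parameter values chosen arbitrarily.

```python
import math


def rasch_probability(theta: float, b: float) -> float:
    """Dichotomous Rasch model:
    P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))


# A person 1 logit above an item's difficulty is expected to answer it
# correctly about 73% of the time.
print(round(rasch_probability(theta=1.0, b=0.0), 2))  # 0.73
```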
Results: The fit parameters were in the optimal interval ranging from 0.50 to 1.50, and the means were around 1.00. The person and item reliability coefficients were higher in the number-right condition than in the formula-scoring condition. The response option analysis showed that the majority of dysfunctional items emerged in the formula-scoring condition.
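The "fit parameters" reported here are, by convention, infit and outfit mean-square statistics, for which values between 0.50 and 1.50 are commonly treated as productive for measurement. The sketch below shows how these statistics are typically computed from model residuals; it is a generic illustration under that assumption, and the response vector and ability values are invented.

```python
import numpy as np


def item_fit_msq(responses: np.ndarray, theta: np.ndarray, b: float):
    """Infit and outfit mean-square statistics for one item.

    responses: 0/1 vector over persons; theta: person abilities; b: item difficulty.
    Outfit is the unweighted mean of squared standardized residuals;
    infit is the information-weighted version, less sensitive to outliers.
    """
    p = 1.0 / (1.0 + np.exp(-(theta - b)))  # Rasch expected scores
    w = p * (1.0 - p)                       # model response variances
    resid_sq = (responses - p) ** 2
    outfit = np.mean(resid_sq / w)          # mean of squared standardized residuals
    infit = resid_sq.sum() / w.sum()        # information-weighted mean square
    return infit, outfit


# Invented example: six persons answering one item of difficulty 0.
infit, outfit = item_fit_msq(np.array([1, 1, 0, 1, 0, 1]),
                             np.array([1.2, 0.5, -0.3, 0.0, -1.0, 2.0]),
                             b=0.0)
print(round(infit, 2), round(outfit, 2))
```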
Conclusions: The findings of this study support the use of number-right scoring over formula scoring. Rasch model analyses showed that tests with number-right scoring have better psychometric properties than formula scoring. However, choosing the appropriate scoring method should depend not only on psychometric properties but also on self-directed test-taking strategies and metacognitive skills.
Keywords: Assessment; Construct-irrelevant variance; Formula scoring; Multiple choice questions; Number-right scoring; Rasch model; Reliability; Validity.
Conflict of interest statement
Ethics approval and consent to participate
The data were collected for another study at a time when there was no formal ethical approval process for such studies, so ethical approval was not sought. An ethics approval committee has since been established, but reanalysis of historical data is automatically ruled exempt. Our work was carried out in accordance with the Declaration of Helsinki and the privacy policy of the University of Groningen. All data were anonymized before analysis and handled confidentially.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Similar articles
- The effect of a 'don't know' option on test scores: number-right and formula scoring compared. Med Educ. 1999;33(4):267-75. doi: 10.1046/j.1365-2923.1999.00292.x. PMID: 10336757
- Evidence-based decision about test scoring rules in clinical anatomy multiple-choice examinations. Anat Sci Educ. 2015;8(3):242-8. doi: 10.1002/ase.1478. PMID: 25053378
- The don't know option in progress testing. Adv Health Sci Educ Theory Pract. 2015;20(5):1325-38. doi: 10.1007/s10459-015-9604-2. PMID: 25912621. Free PMC article. Clinical Trial.
- Improving the evaluation of therapeutic interventions in multiple sclerosis: the role of new psychometric methods. Health Technol Assess. 2009;13(12):iii, ix-x, 1-177. doi: 10.3310/hta13120. PMID: 19216837. Review.
- A primer on classical test theory and item response theory for assessments in medical education. Med Educ. 2010;44(1):109-17. doi: 10.1111/j.1365-2923.2009.03425.x. PMID: 20078762. Review.
Cited by
- Investigating possible causes of bias in a progress test translation: an one-edged sword. Korean J Med Educ. 2019;31(3):193-204. doi: 10.3946/kjme.2019.130. PMID: 31455049. Free PMC article.