Evaluating the Reliability and Validity Evidence of the RIME (Reporter-Interpreter-Manager-Educator) Framework for Summative Assessments Across Clerkships
- PMID: 33116058
- DOI: 10.1097/ACM.0000000000003811
Abstract
Purpose: The ability of medical schools to accurately and reliably assess medical student clinical performance is paramount. The RIME (reporter-interpreter-manager-educator) schema was originally developed as a synthetic and intuitive assessment framework for internal medicine clerkships. Validity evidence of this framework has not been rigorously evaluated outside of internal medicine. This study examined factors contributing to variability in RIME assessment scores using generalizability theory and decision studies across multiple clerkships, thereby contributing to its internal structure validity evidence.
Method: Data were collected from RIME-based summative clerkship assessments during 2018-2019 at Virginia Commonwealth University. Generalizability theory was used to explore variance attributed to different facets through a series of unbalanced random-effects models by clerkship. For all analyses, decision (D-) studies were conducted to estimate the effects of increasing the number of assessments.
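The variance decomposition described here can be illustrated with a small sketch. This is not the authors' analysis (their models included additional facets and were fit separately by clerkship); it assumes a simplified one-facet design in which students are the object of measurement and all other sources pool into residual error. The column names `student` and `score` and the 1-4 coding of RIME levels are illustrative assumptions.

```python
import pandas as pd

def variance_components(df: pd.DataFrame) -> dict:
    """ANOVA-method variance components for an unbalanced one-way
    random-effects design (students as the random facet)."""
    grand_mean = df["score"].mean()
    by_student = df.groupby("student")["score"]
    n_i = by_student.size()          # assessments per student (unbalanced)
    k = len(n_i)                     # number of students
    N = n_i.sum()                    # total observations
    # Within-student mean square = residual variance estimate
    ms_within = ((df["score"] - by_student.transform("mean")) ** 2).sum() / (N - k)
    # Between-student mean square
    ms_between = (n_i * (by_student.mean() - grand_mean) ** 2).sum() / (k - 1)
    # Effective number of observations per student for unbalanced data
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)
    var_student = max((ms_between - ms_within) / n0, 0.0)
    return {"student": var_student, "residual": ms_within}

# Illustrative data: RIME designations coded 1 (reporter) to 4 (educator)
scores = pd.DataFrame({
    "student": ["A", "A", "A", "B", "B", "C", "C", "C"],
    "score":   [2, 2, 3, 1, 2, 3, 3, 2],
})
vc = variance_components(scores)
print(vc, vc["student"] / sum(vc.values()))  # components and share due to students
```

The last line computes the share of total variance attributable to students, the quantity reported as 16.7%-25.4% in the Results below.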
Results: From 231 students, 6,915 observations were analyzed. Interpreter was the most common RIME designation (44.5%-46.8%) across all clerkships. Variability attributable to students ranged from 16.7% in neurology to 25.4% in surgery. D-studies showed that the number of assessments needed to achieve acceptable reliability (0.7) ranged from 7 in pediatrics and surgery to 11 in internal medicine and 12 in neurology. However, depending on the clerkship, each student received only between 3 and 8 assessments.
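These per-clerkship thresholds follow from the standard D-study projection: for the mean of n assessments, the generalizability coefficient is Eρ² = σ²_student / (σ²_student + σ²_error / n), so averaging over more assessments shrinks the error term until the target of 0.7 is reached. A minimal sketch, using placeholder variance components rather than the study's estimates:

```python
def g_coefficient(var_student: float, var_error: float, n: int) -> float:
    """Generalizability coefficient for the mean of n assessments."""
    return var_student / (var_student + var_error / n)

def assessments_needed(var_student: float, var_error: float,
                       target: float = 0.7, max_n: int = 100):
    """Smallest n whose projected reliability meets the target."""
    for n in range(1, max_n + 1):
        if g_coefficient(var_student, var_error, n) >= target:
            return n
    return None

# Placeholder components: students account for ~20% of total variance,
# within the range reported above; about 10 assessments reach 0.7.
print(assessments_needed(var_student=0.20, var_error=0.80))  # -> 10
```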
Conclusions: This study used generalizability and decision (D-) studies to examine the internal structure validity evidence of RIME clinical performance assessments across clinical clerkships. A substantial proportion of the variance in RIME assessment scores was attributable to the rater, with less attributed to the student. However, the proportion of variance attributed to the student was greater than has been demonstrated in other generalizability studies of summative clinical assessments. Overall, these findings support the use of RIME as an assessment framework across clerkships and indicate the number of assessments required to achieve sufficient reliability.
Copyright © 2020 by the Association of American Medical Colleges.
Comment in
- O-RI-M: Reporting to Include Data Interpretation. Acad Med. 2021 Aug 1;96(8):1079-1080. doi: 10.1097/ACM.0000000000004136. PMID: 36047866.
- Reporter-Interpreter-Manager-Educator: An Observational Framework, Not a Grading Framework. Acad Med. 2023 Jan 1;98(1):9-10. doi: 10.1097/ACM.0000000000005017. PMID: 36576762.