BMC Med Educ. 2022 Dec 13;22(1):861. doi: 10.1186/s12909-022-03919-1.

eOSCE stations live versus remote evaluation and scores variability


Donia Bouzid et al. BMC Med Educ. 2022.

Abstract

Background: Objective structured clinical examinations (OSCEs) are known to be a fair evaluation method. In recent years, the use of online OSCEs (eOSCEs) has spread. This study aimed to compare remote versus live evaluation and to assess the factors associated with score variability during eOSCEs.

Methods: We conducted large-scale eOSCEs at the medical school of the Université de Paris Cité in June 2021 and recorded all the students' performances, allowing a second, remote evaluation. To assess agreement in our context of multiple raters and students, we fitted a linear mixed model with student and rater as random effects and the score as the explained variable.

Results: One hundred seventy observations were analyzed for the first station after quality control. We retained 192 and 110 observations for the statistical analysis of the two other stations. The median score and interquartile range were 60 out of 100 (IQR 50-70), 60 out of 100 (IQR 54-70), and 53 out of 100 (IQR 45-62) for the three stations. The proportions of score variance explained by the rater (ICC rater) were 23.0%, 16.8%, and 32.8%, respectively. Of the 31 raters, 18 (58%) were male. Scores did not differ significantly according to the gender of the rater (p = 0.96, 0.10, and 0.26, respectively). The two evaluations showed no systematic difference in scores (p = 0.92, 0.053, and 0.38, respectively).
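The variance decomposition behind the ICC-rater figures can be sketched with a crossed random-effects model of the kind described in the Methods. This is a minimal illustration using simulated scores and statsmodels; the column names, sample sizes, and variance parameters are hypothetical, not taken from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated eOSCE scores: 30 students, each scored by 4 raters
# (fully crossed design; all numbers here are hypothetical).
n_students, n_raters = 30, 4
student = np.repeat(np.arange(n_students), n_raters)
rater = np.tile(np.arange(n_raters), n_students)
score = (
    60
    + rng.normal(0, 8, n_students)[student]  # student random effect
    + rng.normal(0, 5, n_raters)[rater]      # rater random effect
    + rng.normal(0, 4, student.size)         # residual noise
)
df = pd.DataFrame({"score": score, "student": student, "rater": rater})

# Crossed random effects for student and rater, expressed as variance
# components over a single dummy group covering all observations.
model = smf.mixedlm(
    "score ~ 1",
    df,
    groups=np.ones(len(df)),
    re_formula="0",
    vc_formula={"student": "0 + C(student)", "rater": "0 + C(rater)"},
)
result = model.fit()

# Proportion of score variance attributable to the rater (ICC rater):
# var_rater / (var_rater + var_student + var_residual).
vc = dict(zip(model.exog_vc.names, result.vcomp))
icc_rater = vc["rater"] / (vc["rater"] + vc["student"] + result.scale)
print(f"ICC(rater) = {icc_rater:.2f}")
```

A large ICC rater means raters, rather than students, account for much of the score spread, which is why the paper reports this quantity per station.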

Conclusion: Our study suggests that remote evaluation is as reliable as live evaluation for eOSCEs.

Keywords: Global ratings; Interrater reliability; Remote objective structured clinical examination.


Conflict of interest statement

None.

