Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery
- PMID: 33640326
- PMCID: PMC8217126
- DOI: 10.1016/j.jsurg.2021.02.004
Abstract
Objective: To test whether crowdsourced lay raters can accurately assess cataract surgical skills.
Design: Two-armed study: independent cross-sectional and longitudinal cohorts.
Setting: Washington University Department of Ophthalmology.
Participants and methods: Sixteen cataract surgeons with varying experience levels submitted cataract surgery videos to be graded by 5 experts and 300+ crowdworkers masked to surgeon experience. Cross-sectional study: 50 videos from surgeons ranging from first-year resident to attending physician, pooled by years of training. Longitudinal study: 28 videos obtained at regular intervals as residents progressed through 180 cases. Surgical skill was graded using the modified Objective Structured Assessment of Technical Skill (mOSATS). Main outcome measures were overall technical performance, reliability indices, and correlation between expert and crowd mean scores.
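The main outcome measures rest on paired comparisons of per-video mean scores from the two rater groups. As a minimal sketch of that kind of analysis (not the study's code; the arrays below are made-up placeholders, not study data), a Pearson correlation and a paired t-test in Python could look like:

    import numpy as np
    from scipy import stats

    # Made-up per-video mean mOSATS scores for six hypothetical videos.
    expert_means = np.array([12.0, 15.5, 18.0, 22.5, 27.0, 31.5])
    crowd_means = np.array([16.0, 19.0, 21.5, 24.0, 28.0, 32.0])

    # Agreement in ranking between expert and crowd mean scores.
    r, p_corr = stats.pearsonr(expert_means, crowd_means)

    # Systematic offset: are crowd scores higher on the same videos?
    t, p_paired = stats.ttest_rel(crowd_means, expert_means)

    print(f"Pearson r = {r:.3f} (p = {p_corr:.3g})")
    print(f"Paired t-test: t = {t:.2f}, p = {p_paired:.3g}")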
Results: Experts demonstrated high interrater reliability and accurately predicted training level, establishing construct validity for the modified OSATS. Crowd scores correlated with expert scores (r = 0.865, p < 0.0001) but were consistently higher for first-, second-, and third-year residents (p < 0.0001, paired t-test). Surgery duration correlated negatively with training level (r = -0.855, p < 0.0001) and with expert score (r = -0.927, p < 0.0001). The longitudinal dataset reproduced the cross-sectional findings for the crowd and expert comparisons. A regression equation transforming crowd score plus video length into expert score was derived from the cross-sectional dataset (r² = 0.92) and showed excellent predictive performance when applied to the independent longitudinal dataset (r² = 0.80). A group of student raters who had edited the cataract videos also graded them, producing scores that approximated expert scores more closely than the crowd's did.
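The regression described above adjusts crowd score with surgery duration to predict expert score. A minimal sketch of such a two-predictor linear model, using hypothetical data (the actual equation and coefficients are not reproduced in the abstract), could be fit on one dataset and applied to an independent one as follows:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical training rows: [crowd mean score, surgery duration in minutes].
    X = np.array([
        [16.0, 45.0],
        [19.0, 38.0],
        [21.5, 33.0],
        [24.0, 27.0],
        [28.0, 22.0],
        [32.0, 18.0],
    ])
    y = np.array([12.0, 15.5, 18.0, 22.5, 27.0, 31.5])  # hypothetical expert means

    model = LinearRegression().fit(X, y)   # derive the adjustment on one dataset
    print("coefficients:", model.coef_, "intercept:", model.intercept_)
    print("in-sample r^2:", model.score(X, y))

    # The fitted equation would then be validated on an independent dataset, e.g.:
    X_new = np.array([[22.0, 30.0]])       # one held-out hypothetical video
    print("predicted expert score:", model.predict(X_new)[0])

In this framing, duration helps because it itself tracks training level and expert score, so it corrects the crowd's tendency to overrate novice surgeons.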
Conclusions: Crowdsourced rankings correlated with expert scores, but were not equivalent; crowd scores overestimated technical competency, especially for novice surgeons. A novel approach of adjusting crowd scores with surgery duration generated a more accurate predictive model for surgical skill. More studies are needed before crowdsourcing can be reliably used for assessing surgical proficiency.
Keywords: Crowdsourcing; cataract surgery; phacoemulsification; surgical assessment; surgical competence.
Copyright © 2021 The Author(s). Published by Elsevier Inc. All rights reserved.
Conflict of interest statement
Declarations of interest: none relevant to this study.
Comment in
- Commentary on 'Crowd-sourced Assessment of Surgical Skill Proficiency in Cataract Surgery'. J Surg Educ. 2021 Jul-Aug;78(4):1089-1090. doi: 10.1016/j.jsurg.2021.03.001. Epub 2021 Mar 23. PMID: 33766542.
- Regarding "Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery". J Surg Educ. 2021 Jul-Aug;78(4):1073-1074. doi: 10.1016/j.jsurg.2021.03.009. Epub 2021 Apr 8. PMID: 33840630.
