Evaluation of different approaches to define expert benchmark scores for new robotic training simulators based on the Medtronic HUGO™ RAS surgical robot experience
- PMID: 38451376
- DOI: 10.1007/s11701-024-01868-z
Abstract
New robot-assisted surgery platforms under development will be required to offer proficiency-based simulation training. Scoring methodologies and performance feedback for trainees are currently inconsistent across robotic simulator platforms, and there are virtually no prior publications on how VR simulation passing benchmarks have been established. This paper compares methods for determining the proficiency-based scoring thresholds (benchmarks) for the new Medtronic Hugo™ RAS robotic simulator. Nine experienced robotic surgeons from multiple disciplines performed each of the 49 skills exercises 5 times. The data were analyzed in 3 ways: (1) include all data collected, (2) exclude first sessions, (3) exclude outliers. Eliminating the first session accounts for familiarization with the exercise; excluding outliers removes potentially erroneous data caused by technical issues, unexpected distractions, etc. Outliers were identified using a common statistical technique based on the interquartile range of the data. For each method, the mean and standard deviation were calculated, and the benchmark was set at 1 standard deviation above the mean. Compared with including all the data, excluding outliers removes fewer data points than excluding first sessions and makes the metric benchmarks more difficult by an average of 11%, whereas excluding first sessions makes the benchmarks easier by an average of about 2%. Excluding outliers therefore produced the largest change relative to benchmarks calculated from all data points, making them more challenging, and we determined that this method provided the best representation of the data. These benchmarks should be validated in future clinical training studies.
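The benchmark-setting procedure the abstract describes — exclude interquartile-range outliers, then set the passing threshold at one standard deviation above the mean — can be sketched in Python. This is an illustrative reconstruction only: the function names and sample data are hypothetical, the paper does not publish its code, and the standard Tukey 1.5×IQR fence is assumed as the "common statistical technique" mentioned.

```python
import statistics

def exclude_outliers(values):
    """Drop points outside Tukey's fences [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles (Python 3.8+)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]

def benchmark(values, drop_outliers=True):
    """Passing threshold = mean + 1 standard deviation of the expert scores."""
    data = exclude_outliers(values) if drop_outliers else list(values)
    return statistics.mean(data) + statistics.stdev(data)

# Hypothetical completion times (s) for one exercise; 120 s is an outlier
# (e.g., a session interrupted by a technical issue).
times = [50, 52, 48, 51, 49, 120]
threshold = benchmark(times)  # outlier removed, then mean + 1 SD
```

For a time-based metric where lower is better, removing a high outlier pulls both the mean and the standard deviation down, which tightens the threshold — consistent with the paper's finding that excluding outliers made the benchmarks harder on average.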
Keywords: Benchmarking simulation passing scores; Proficiency-based training; Robotic surgery; Surgical education; Virtual simulation.
© 2024. The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.