Simul Healthc. 2008 Spring;3(1):16-25. doi: 10.1097/SIH.0b013e31816366b9.

LapMentor metrics possess limited construct validity


Pamela B Andreatta et al. Simul Healthc. 2008 Spring.

Abstract

Background: Many surgical training programs are introducing virtual-reality laparoscopic simulators into their curricula. If a surgical simulator is to be used to determine when a trainee has reached an "expert" level of performance, its evaluation metrics must accurately reflect varying levels of skill. The ability of a metric to differentiate novice from expert performance is referred to as construct validity. The present study was undertaken to determine whether the LapMentor's metrics demonstrate construct validity.

Methods: Medical students, residents, and faculty laparoscopic surgeons (n = 5-14 per group) performed 5 consecutive repetitions of 6 laparoscopic skills tasks: 30° Camera Manipulation, Eye-Hand Coordination, Clipping/Grasping, Cutting, Electrocautery, and Translocation of Objects. The LapMentor measured performance on 4 to 12 parameters per task. Mean performance for each parameter was compared between subject groups for the first and fifth repetitions. Pairwise comparisons among the 3 groups were made with post hoc t-tests using the Bonferroni correction. Significance was set at P < 0.05.
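
To make the statistical comparison concrete, the sketch below illustrates Bonferroni-corrected pairwise t-tests among three groups in Python. The group names and values are hypothetical placeholders, and this shows the general technique only, not the authors' actual analysis code.

from itertools import combinations
from scipy import stats

# Hypothetical task-time data for the three subject groups (not the study's measurements)
groups = {
    "medical_students": [42.1, 38.5, 45.0, 40.2, 39.8],
    "residents":        [35.2, 33.8, 36.9, 34.1, 32.7],
    "faculty":          [28.4, 27.9, 30.1, 26.5, 29.3],
}

alpha = 0.05
pairs = list(combinations(groups, 2))   # 3 pairwise comparisons among the groups
corrected_alpha = alpha / len(pairs)    # Bonferroni-corrected significance threshold

for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, "
          f"significant = {p < corrected_alpha}")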

Results: Of the 6 tasks evaluated, only the Eye-Hand Coordination task (3/12 parameters) and the Clipping/Grasping task (1/7 parameters) showed expert-level discrimination when performance was compared after completion of 1 repetition. Comparison of fifth-repetition performance (representing the plateau of the learning curves) demonstrated that the parameters Time and Score had expert-level discrimination on the Eye-Hand Coordination task, and Time on the Cutting task. The remaining LapMentor tasks did not differentiate level of expertise based on the built-in metrics on either repetition 1 or 5.

Conclusions: The majority of the LapMentor tasks' metrics were unable to differentiate between laparoscopic experts and less skilled subjects. Therefore, performance on those tasks may not accurately reflect a subject's true level of ability. Feedback to the manufacturer about these findings may encourage the development of evaluation parameters with greater sensitivity.
