4.1 Article

LapMentor Metrics Possess Limited Construct Validity

Publisher

Lippincott Williams & Wilkins
DOI: 10.1097/SIH.0b013e31816366b9

Keywords

Laparoscopy; Simulation; LapMentor laparoscopic simulator; Virtual reality; Construct validity

Funding

  1. United States Surgical (Norwalk, Connecticut)
  2. Association for Surgical Education Foundation's Center for Excellence in Surgical Education, Research, and Training (Springfield, Illinois)


Abstract

Background: Many surgical training programs are introducing virtual-reality laparoscopic simulators into their curricula. If a surgical simulator is to be used to determine when a trainee has reached an expert level of performance, its evaluation metrics must accurately reflect varying levels of skill. The ability of a metric to differentiate novice from expert performance is referred to as construct validity. The present study was undertaken to determine whether the LapMentor's metrics demonstrate construct validity.

Methods: Medical students, residents, and faculty laparoscopic surgeons (n = 5-14 per group) performed 5 consecutive repetitions of 6 laparoscopic skills tasks: 30-degree Camera Manipulation, Eye-Hand Coordination, Clipping/Grasping, Cutting, Electrocautery, and Translocation of Objects. The LapMentor measured performance on 4 to 12 parameters per task. Mean performance for each parameter was compared between subject groups for the first and fifth repetitions. Pairwise comparisons among the 3 groups were made by post hoc t-tests with the Bonferroni technique. Significance was set at P < 0.05.

Results: Of the 6 tasks evaluated, only the Eye-Hand Coordination task (3/12 parameters) and the Clipping/Grasping task (1/7 parameters) showed expert-level discrimination when performance was compared after completion of 1 repetition. Comparison of fifth-repetition performance (representing the plateau of the learning curves) demonstrated that the parameters Time and Score had expert-level discrimination on the Eye-Hand Coordination task, and Time on the Cutting task. The remaining LapMentor tasks did not differentiate level of expertise by their built-in metrics on either repetition 1 or 5.

Conclusions: The majority of the LapMentor tasks' metrics were unable to differentiate between laparoscopic experts and less skilled subjects; performance on those tasks therefore may not accurately reflect a subject's true level of ability. Feedback to the manufacturer about these findings may encourage the development of evaluation parameters with greater sensitivity. (Sim Healthcare 3: 16-25, 2008)
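The Methods above report pairwise post hoc t-tests among the three subject groups with the Bonferroni technique at P < 0.05. The Python sketch below illustrates that comparison scheme only; the group labels, sample sizes, and score distributions are invented placeholders, not the study's data.

from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-subject values for a single parameter (e.g., Time on one
# task, one repetition); group sizes and distributions are illustrative only.
groups = {
    "students":  rng.normal(120.0, 20.0, size=14),  # novices
    "residents": rng.normal(100.0, 20.0, size=10),
    "faculty":   rng.normal(70.0, 15.0, size=5),    # experts
}

# Pairwise post hoc t-tests among the 3 groups, Bonferroni-corrected.
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold

for a, b in pairs:
    t_stat, p_value = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p_value < alpha else "not significant"
    print(f"{a} vs. {b}: t = {t_stat:.2f}, p = {p_value:.4f} ({verdict})")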
