Int J Comput Assist Radiol Surg. 2019 Apr;14(4):645-657.
doi: 10.1007/s11548-019-01918-0. Epub 2019 Feb 7.

Wearable technology-based metrics for predicting operator performance during cardiac catheterisation

Jonathan Currie et al. Int J Comput Assist Radiol Surg. 2019 Apr.

Abstract

Introduction: Unobtrusive metrics that can automatically assess performance during clinical procedures are of value. Three approaches to deriving wearable technology-based metrics are explored: (1) eye tracking, (2) psychophysiological measurements [e.g. electrodermal activity (EDA)] and (3) arm and hand movement via accelerometry. We also measure attentional capacity by tasking the operator with tracking an unrelated object during the procedure.

Methods: Two aspects of performance are measured: (1) using eye gaze and psychophysiology metrics and (2) measuring attentional capacity via an additional unrelated task (to monitor a visual stimulus/playing cards). The aim was to identify metrics that can be used to automatically discriminate between levels of performance or at least between novices and experts. The study was conducted using two groups: (1) novice operators and (2) expert operators. Both groups made two attempts at a coronary angiography procedure using a full-physics virtual reality simulator. Participants wore eye tracking glasses and an E4 wearable wristband. Areas of interest were defined to track visual attention on display screens, including: (1) X-ray, (2) vital signs, (3) instruments and (4) the stimulus screen (for measuring attentional capacity).
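The dwell-time and fixation percentages per area of interest (AOI) described above can be sketched as follows. This is a minimal illustration only: the gaze-sample format (timestamped points labelled with an AOI, or None when gaze falls outside every AOI) and the fixed sampling rate are assumptions, not the authors' actual pipeline.

```python
# Sketch: per-AOI dwell-time percentages from labelled gaze samples.
# ASSUMPTION: samples arrive at a fixed rate as (timestamp_s, aoi_or_None),
# so dwell time is proportional to sample count. This data format is
# hypothetical, not the format used in the study.

from collections import Counter

AOIS = ["X-ray", "vital signs", "instruments", "stimulus screen"]

def dwell_percentages(samples):
    """Return the percentage of total recording time spent in each AOI."""
    counts = Counter(aoi for _, aoi in samples)
    total = len(samples)
    return {aoi: 100.0 * counts[aoi] / total for aoi in AOIS}

# Toy recording: 10 samples -- 6 on the X-ray screen, 2 on vital signs,
# 2 off-screen (None).
samples = [(0.1 * i, "X-ray") for i in range(6)] \
        + [(0.6 + 0.1 * i, "vital signs") for i in range(2)] \
        + [(0.8 + 0.1 * i, None) for i in range(2)]

pct = dwell_percentages(samples)
print(pct["X-ray"])        # 60.0
print(pct["vital signs"])  # 20.0
```

The same counting approach extends to fixation percentages if the samples are first grouped into fixation events by a fixation-detection filter.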

Results: Experts provided greater dwell time (63% vs 42%, p = 0.03) and fixations (50% vs 34%, p = 0.04) on display screens. They also provided greater dwell time (11% vs 5%, p = 0.006) and fixations (9% vs 4%, p = 0.007) when selecting instruments. The experts' performance for tracking the unrelated object during the visual stimulus task negatively correlated with total errors (r = - 0.95, p = 0.0009). Experts also had a higher standard deviation of EDA (2.52 µS vs 0.89 µS, p = 0.04).
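The two statistics reported above, the Pearson correlation between card-acknowledgement performance and total errors, and the standard deviation of the EDA trace, can be computed as in this sketch. The numbers below are illustrative toy data, not the study's measurements.

```python
# Sketch: Pearson r and sample SD, the statistics reported in the Results.
# Toy data only -- not the study's recordings.

import math
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

# A perfectly linear negative relationship gives r = -1.0.
ack = [90.0, 80.0, 70.0, 60.0]      # card acknowledgement %, hypothetical
errors = [1.0, 2.0, 3.0, 4.0]       # total procedural errors, hypothetical
print(pearson_r(ack, errors))       # -1.0

# Sample SD of a toy EDA trace (microsiemens).
eda = [0.8, 1.2, 2.5, 3.1, 1.9]
print(statistics.stdev(eda))
```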

Conclusions: Eye tracking metrics may help discriminate between novice and expert operators by showing that experts maintain greater visual attention on the display screens. In addition, the visual stimulus study shows that an unrelated task can measure attentional capacity. Trial registration: This work is registered through ClinicalTrials.gov, a service of the U.S. National Institutes of Health, under trial reference NCT02928796.

Keywords: Attentional capacity; Eye tracking; Simulation-based training; Surgical simulation; Wearable technology.

Conflict of interest statement

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

Ethical approval for this study was granted across the island of Ireland: (1) Ulster University (ref: FCEEFC 20160630), (2) University College Cork (ref: ECM 4 (g) 09/08/16).

Informed consent

All subjects provided informed consent, as approved by the ethics committees.

Figures

Fig. 1
Main image: Mentice VIST-Lab simulator, with the four AOIs identified. Bottom right: a participant during procedural performance, wearing eye tracking glasses connected to a portable recording device (placed to the left on the simulator table) and the Empatica E4 wristband (hidden)
Fig. 2
Card acknowledgement % effect on total errors for the first attempt. (1) All participants (full dataset), (2) novice only, (3) expert only
Fig. 3
Card acknowledgement % relationship with total errors for the final attempt. (1) All participants (full dataset) included, (2) a clear outlier (a novice) removed from the dataset, (3) novice only, (4) novice only with outlier removed, (5) expert only
Fig. 4
Group comparison for transition frequency over all AOIs
Fig. 5
Group comparison of calculated SD for recorded EDA during both attempts
