Proc ACM Int Conf Multimodal Interact. 2020 Oct;2020:156–164.
doi: 10.1145/3395035.3425190.

Enforcing Multilabel Consistency for Automatic Spatio-Temporal Assessment of Shoulder Pain Intensity


Diyala Erekat et al. Proc ACM Int Conf Multimodal Interact. 2020 Oct.

Abstract

The standard clinical assessment of pain is limited primarily to self-reported pain or clinician impression. While the self-reported measurement of pain is useful, in some circumstances it cannot be obtained. Automatic facial expression analysis has emerged as a potential solution for an objective, reliable, and valid measurement of pain. In this study, we propose a video-based approach for the automatic measurement of both self-reported pain and observer-rated pain intensity. To this end, we explore the added value of three self-reported pain scales, i.e., the Visual Analog Scale (VAS), the Sensory Scale (SEN), and the Affective Motivational Scale (AFF), as well as the Observer Pain Intensity (OPI) rating, for a reliable assessment of pain intensity from facial expression. Using a spatio-temporal Convolutional Neural Network - Recurrent Neural Network (CNN-RNN) architecture, we propose to jointly minimize the mean absolute error of pain score estimation for each of these scales while maximizing the consistency between them. The reliability of the proposed method is evaluated on the benchmark database for pain measurement from videos, namely, the UNBC-McMaster Pain Archive. Our results show that enforcing consistency between the self-reported pain intensity scores collected using different pain scales enhances the quality of predictions and improves on the state of the art in automatic self-reported pain estimation. The obtained results suggest that automatic assessment of self-reported pain intensity from videos is feasible, and could be used as a complementary instrument to unburden caregivers, especially for vulnerable populations that need constant monitoring.
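To make the joint objective concrete, the sketch below illustrates one plausible form of a multi-label loss that combines per-scale mean absolute error with a pairwise consistency penalty across the four scales. This is an illustrative reconstruction, not the paper's exact formulation: the weight `lam`, the pairwise-gap form of the consistency term, and the assumption that all scales are pre-normalized to a common [0, 1] range are assumptions introduced here.

```python
import numpy as np

def joint_pain_loss(preds, targets, lam=0.5):
    """Illustrative multi-label pain loss (sketch; the paper's exact term may differ).

    preds, targets: dicts mapping scale name ("VAS", "SEN", "AFF", "OPI")
    to arrays of per-video scores, assumed normalized to [0, 1] so the
    scales are directly comparable. lam is an assumed consistency weight.
    """
    scales = list(preds)
    # Term 1: mean absolute error of the predictions, averaged over scales.
    mae = np.mean([np.abs(preds[s] - targets[s]).mean() for s in scales])
    # Term 2: penalize disagreement between every pair of predicted scales,
    # pushing the model toward mutually consistent pain scores.
    gaps = [np.abs(preds[a] - preds[b]).mean()
            for i, a in enumerate(scales) for b in scales[i + 1:]]
    return mae + lam * np.mean(gaps)
```

Under this formulation, a prediction that is accurate on three scales but wildly off on the fourth is penalized twice: once through its MAE and again through its disagreement with the other predicted scales.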

Keywords: Convolutional Neural Network; Dynamics; Facial Expression; Observer Pain Intensity; Pain; Recurrent Neural Network; Visual Analogue Scale.


Figures

Figure 1:
The graphical representation of our end-to-end pain estimation model. (a) The CNN model trained to learn frame-by-frame spatial features (4096D per frame). (b) The 2-layer GRU model trained to learn per-video temporal dynamics of facial features. (c) The multivariate regression model to estimate pain intensity scores consistent with the VAS, SEN, AFF, and OPI independently and in combination [19].
Figure 2:
Face registration. (a) Original image. (b) Tracked facial landmarks. (c) Triangulation. (d) Normalized face [19].
Figure 3:
The UNBC-McMaster Pain Archive distribution per participant for (a) the number of videos and (b) the mean duration (with standard deviation) of the videos.
Figure 4:
Distribution of pain intensity scores for (a) VAS and (b) OPI in the UNBC-McMaster Pain Archive.
Figure 5:
Distribution of the VAS MAEs per participant when training with all four scales and the consistency term, compared to training with a single label.
Figure 6:
Distribution of the VAS MAEs per pain intensity level.
Figure 7:
Distribution of the OPI MAEs per pain intensity level.
Figure 8:
Cumulative VAS scores for different participants in the UNBC-McMaster Pain Archive.

References

    1. Ahmed Bilal Ashraf, Simon Lucey, Jeffrey F. Cohn, Tsuhan Chen, Zara Ambadar, Kenneth M. Prkachin, and Patricia E. Solomon. 2009. The painful face – pain expression recognition using active appearance models. Image and Vision Computing 27, 12 (2009), 1788–1796.
    2. Marian Stewart Bartlett, Gwen C. Littlewort, Mark G. Frank, and Kang Lee. 2014. Automatic decoding of facial movements reveals deceptive pain expressions. Current Biology 24, 7 (2014), 738–743.
    3. Zhanli Chen, Rashid Ansari, and Diana J. Wilkie. 2012. Automated detection of pain from facial expressions: a rule-based approach using AAM. In Proceedings of SPIE – the International Society for Optical Engineering, Medical Imaging 2012: Image Processing. San Diego, CA, 1–17.
    4. Junyoung Chung, Çaglar Gülçehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. CoRR abs/1412.3555 (2014).
    5. Jacob Cohen. 1988. Set correlation and contingency tables. Applied Psychological Measurement 12, 4 (1988), 425–434.
