Automatically Detecting Pain Using Facial Actions

Patrick Lucey et al. Int Conf Affect Comput Intell Interact Workshops. 2009 Dec 8;2009:1-8. doi: 10.1109/ACII.2009.5349321.

Abstract

Pain is generally measured by patient self-report, normally via verbal communication. However, if the patient is a child or has a limited ability to communicate (e.g., patients who are mute, cognitively impaired, or on assisted breathing), self-report may not be a viable measurement. In addition, these self-report measures relate only to the maximum pain level experienced during a sequence, so a frame-by-frame measure is currently not obtainable. Using image data from patients with rotator-cuff injuries, in this paper we describe an active appearance model (AAM)-based automatic system which can detect pain on a frame-by-frame level. We do this in two ways: directly (straight from the facial features) and indirectly (through the fusion of individual action unit (AU) detectors). Our results show that the latter method achieves the best performance, as the most discriminant features from each AU detector (i.e., shape or appearance) are used.
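The indirect route described in the abstract, fusing the outputs of individual AU detectors into a single per-frame pain decision, can be sketched as a weighted score-level fusion. The uniform weights, zero threshold, and toy scores below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fuse_au_scores(au_scores, weights=None, threshold=0.0):
    """Score-level fusion: combine per-frame SVM scores from several
    AU detectors into one pain score per frame, then threshold it.

    au_scores : (n_frames, n_aus) array of detector output scores
    weights   : per-AU fusion weights (uniform if None; a learned
                combiner would fit these on held-out data)
    threshold : fused scores above this are labelled "pain"
    """
    au_scores = np.asarray(au_scores, dtype=float)
    n_aus = au_scores.shape[1]
    if weights is None:
        weights = np.full(n_aus, 1.0 / n_aus)
    fused = au_scores @ np.asarray(weights, dtype=float)
    return fused, fused > threshold

# Toy scores: 4 frames x 3 hypothetical AU detectors (e.g. AU4, AU6, AU43)
scores = [[-1.0, -0.5, -0.2],
          [ 0.8,  1.2,  0.4],
          [ 1.5,  0.9,  1.1],
          [-0.3,  0.2, -0.6]]
fused, pain = fuse_au_scores(scores)
print(pain.tolist())  # → [False, True, True, False]
```

In practice the per-AU scores would come from separate classifiers, each trained on whichever feature type (shape or appearance) discriminates that AU best, which is what gives the indirect route its advantage.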


Figures

Figure 1
An example of the facial actions associated with pain. In this example, AUs 4, 6, 7, 9, 10, 12, 25, and 43 are present.
Figure 2
Example of the output of the AAM tracking and the associated shape and appearance features: (a) the original sequence, (b) the AAM tracked sequence, (c) the normalized shape features (PTS), and (d) normalized appearance using 500 DCT coefficients (APP500).
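As a rough illustration of the APP500 representation in the caption above, the sketch below computes an orthonormal 2-D DCT of a (pre-normalized) face patch and keeps a fixed number of low-frequency coefficients. The coefficient ordering (by row+column frequency index, a stand-in for zig-zag scanning) and the patch size are assumptions, not details taken from the paper:

```python
import numpy as np

def dct2(img):
    """Orthonormal 2-D DCT-II built from plain NumPy matrix products."""
    def dct_matrix(n):
        k = np.arange(n)
        m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        m[0, :] /= np.sqrt(2)          # DC row scaling for orthonormality
        return m * np.sqrt(2.0 / n)
    h, w = img.shape
    return dct_matrix(h) @ img @ dct_matrix(w).T

def appearance_features(face, n_coeffs=500):
    """Vectorize the DCT coefficients from low to high frequency and
    keep the first n_coeffs (500 in the APP500 setting)."""
    c = dct2(np.asarray(face, dtype=float))
    h, w = c.shape
    freq = np.add.outer(np.arange(h), np.arange(w)).ravel()
    order = np.argsort(freq, kind="stable")
    return c.ravel()[order][:n_coeffs]

# Usage: a hypothetical 16x16 shape-normalized face patch
face = np.ones((16, 16))
feats = appearance_features(face, n_coeffs=10)
```

Truncating to the low-frequency coefficients keeps the coarse appearance of the face while discarding fine texture and noise, which is the usual rationale for a DCT-based appearance feature.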
Figure 3
The output scores from the SVM for the various features used. In these curves, the horizontal red line denotes the threshold that a score must exceed for the patient to be deemed in pain. The bottom curve is the actual transcribed pain intensity (as described in Section 2.1). Above the curves are the frames that coincide with actions of interest, namely: (a) frame 1, (b) frame 101, (c) frame 125, (d) frame 160, and (e) frame 260.
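A minimal sketch of the thresholding step in the caption above: frames whose SVM score exceeds the decision threshold (the horizontal red line) are labelled as pain, and contiguous runs of such frames form pain episodes. The helper and the score trace are hypothetical:

```python
def pain_intervals(scores, threshold):
    """Return inclusive (start, end) frame-index pairs for contiguous
    runs of frames whose SVM score exceeds the decision threshold,
    i.e. the stretches the system labels as "pain"."""
    intervals, start = [], None
    for i, s in enumerate(scores):
        if s > threshold and start is None:
            start = i                       # run of pain frames begins
        elif s <= threshold and start is not None:
            intervals.append((start, i - 1))  # run ends at previous frame
            start = None
    if start is not None:                   # run reaches the last frame
        intervals.append((start, len(scores) - 1))
    return intervals

# Hypothetical per-frame SVM scores and a zero threshold
trace = [-0.4, 0.2, 0.9, 0.7, -0.1, -0.3, 0.5, 0.6]
print(pain_intervals(trace, 0.0))  # → [(1, 3), (6, 7)]
```

This is also how a frame-by-frame detector recovers information that a single per-sequence self-report cannot: the onsets and offsets of pain within the sequence, not just its maximum.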

