Curr Biol. 2014 Mar 31;24(7):738-43. doi: 10.1016/j.cub.2014.02.009. Epub 2014 Mar 20.

Automatic decoding of facial movements reveals deceptive pain expressions

Marian Stewart Bartlett et al. Curr Biol. 2014.

Abstract

In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. These simulated expressions are so convincing that they deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and that even after training, human observers improved accuracy only to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from those of faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of the neural control systems involved in emotional signaling.

Figures

Figure 1. Example of facial action coding
Here, a facial expression of pain is coded in terms of eight component facial actions based on the Facial Action Coding System (FACS).
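Because this coding scheme underlies everything downstream, a minimal illustration of a FACS-coded frame may help. The particular action units and intensity letters below are assumptions for illustration (eight action units commonly associated with pain expressions in the FACS literature), not the exact coding shown in the figure.

    # Hypothetical FACS coding of one pain-expression frame. Action unit
    # (AU) numbers follow Ekman and Friesen's FACS; intensity letters run
    # from 'A' (trace) to 'E' (maximum). These specific AUs and
    # intensities are illustrative assumptions, not the figure's values.
    pain_frame = {
        4:  'D',  # AU4  brow lowerer
        6:  'C',  # AU6  cheek raiser
        7:  'C',  # AU7  lid tightener
        9:  'B',  # AU9  nose wrinkler
        10: 'B',  # AU10 upper lip raiser
        12: 'A',  # AU12 lip corner puller
        25: 'B',  # AU25 lips part
        43: 'C',  # AU43 eyes closed
    }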
Figure 2. System Overview
Face video is processed by the computer vision system CERT to measure the magnitude of 20 facial actions over time. The CERT output at the top shows three facial actions during a sample of real pain; the output at the bottom shows the same three actions during faked pain from the same subject. Note that these facial actions are present in both real and faked pain, but their dynamics differ. Expression dynamics were measured with a bank of eight temporal Gabor filters and expressed as ‘bags of temporal features.’ These measures were passed to a machine learning system (a nonlinear support vector machine) to classify real versus faked pain. The classification parameters were learned from the 24 one-minute examples of real and faked pain.
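As a concrete illustration of the caption's final stage, here is a short Python sketch of the classification step. The caption specifies only a nonlinear support vector machine learned from the 24 one-minute examples; the RBF kernel, regularization constant, feature standardization, and leave-one-subject-out cross-validation below are illustrative assumptions, not the authors' exact protocol.

    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def classify_real_vs_faked(X, y, groups):
        """X: one bag-of-temporal-features descriptor per one-minute clip
        (construction sketched after Figure 3); y: 1 = real pain, 0 =
        faked; groups: subject IDs, so no subject's clips appear in both
        the training and test folds. Kernel, C, and the CV scheme are
        assumed values, not the paper's."""
        clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
        cv = LeaveOneGroupOut()
        return cross_val_score(clf, X, y, groups=groups, cv=cv).mean()

Leave-one-subject-out splitting is one conventional way to keep a subject's real and faked clips out of the training fold at test time; the paper's actual protocol may differ.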
Figure 3. Bags of Temporal Features
Here we illustrate one stimulus as it is processed at each step. A. Sample CERT signals from one subject (black circles indicate the time point of the face image shown in Figure 2). Three seconds of data are illustrated, but processing is performed on the full 60 seconds of video. B. The CERT signals were filtered by temporal Gabor filters at eight frequency bands. C. Filter outputs for one facial action (brow lower) and one temporal frequency band (the highest frequency). D. Zero crossings are detected, and the area under or over the curve is calculated for each segment. The descriptor consists of histograms of area under the curve for positive regions and separate histograms of area over the curve for negative regions. (Negative output indicates evidence for the absence of the facial action.) E. The full bag of temporal features for one action (brow lower) consists of eight pairs of histograms, one per filter.
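Steps B–E can be sketched directly from the caption: filter each CERT channel with a bank of temporal Gabor filters, cut each filter output at its zero crossings, integrate each segment, and histogram positive and negative areas separately. In the minimal Python sketch below, the filter frequencies, bandwidths, histogram bin count, and bin range are all assumed values; the caption fixes only the structure (eight bands, paired positive and negative histograms).

    import numpy as np
    from scipy.signal import fftconvolve

    def temporal_gabor_bank(n_bands=8, fps=30.0, base_freq=0.25, length=121):
        """Bank of real temporal Gabor filters (Gaussian-windowed cosines).
        All parameter values here are assumptions; the caption fixes only
        the number of frequency bands (eight)."""
        t = (np.arange(length) - length // 2) / fps
        bank = []
        for k in range(n_bands):
            f = base_freq * 2.0 ** (k / 2.0)  # assumed half-octave spacing (Hz)
            sigma = 1.0 / (2.0 * f)           # bandwidth tied to frequency
            g = np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f * t)
            bank.append(g / np.sum(np.abs(g)))
        return bank

    def signed_area_histograms(y, n_bins=10, max_area=5.0):
        """Cut a filtered signal at its zero crossings, integrate each
        segment, and histogram positive-segment and negative-segment
        areas separately (bin count and range are assumed values)."""
        cuts = np.where(np.diff(np.sign(y)) != 0)[0] + 1
        areas = np.array([seg.sum() for seg in np.split(y, cuts)])
        bins = np.linspace(0.0, max_area, n_bins + 1)
        pos, _ = np.histogram(areas[areas > 0], bins=bins)
        neg, _ = np.histogram(-areas[areas < 0], bins=bins)
        return np.concatenate([pos, neg]).astype(float)

    def bag_of_temporal_features(action_signal, bank):
        """Full bag for one facial action (Figure 3E): eight pairs of
        histograms, one pair per temporal filter."""
        return np.concatenate([
            signed_area_histograms(fftconvolve(action_signal, g, mode='same'))
            for g in bank
        ])

Concatenating these per-action bags across all 20 CERT channels yields the fixed-length per-video descriptor that feeds the classifier sketched after Figure 2.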
Figure 4. Contribution of temporal information
Classification performance (A’) is shown for temporal integration window sizes ranging from 10 seconds to 60 seconds. Windows were slid across each video, and performance was averaged across temporal positions. Performance is shown for the 5-feature system. Performance above the shaded region is statistically significant at the p < .05 level. Error bars are one standard error of the mean.
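A’ is a nonparametric signal detection measure of discriminability (0.5 = chance, 1.0 = perfect). Below is a sketch of the metric, using the standard approximation from hit and false alarm rates (after Pollack and Norman, 1964, and Grier, 1971), together with an assumed sliding-window scheme; the 30 fps frame rate and 1 s window step are illustrative values, since the caption states only that performance was averaged over window positions.

    import numpy as np

    def a_prime(hit_rate, fa_rate):
        """Nonparametric discriminability A': 0.5 is chance, 1.0 is
        perfect discrimination of real from faked pain."""
        h, f = hit_rate, fa_rate
        if h == f:
            return 0.5
        if h > f:
            return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
        return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

    def sliding_windows(frames, fps=30.0, win_s=10.0, step_s=1.0):
        """Yield successive fixed-length windows of a per-frame signal.
        Frame rate and step size are assumed values, not the paper's."""
        win, step = int(win_s * fps), int(step_s * fps)
        for start in range(0, len(frames) - win + 1, step):
            yield frames[start:start + win]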

References

    1. Darwin C. The Expression of the Emotions in Man and Animals. London: Murray; 1872.
    2. Ekman P. The argument and evidence about universals in facial expressions of emotion. In: Raskin DC, editor. Psychological Methods in Criminal Investigation and Evidence. New York: Springer Publishing Co.; 1989. pp. 297–332.
    3. Frank M, Ekman P, Friesen W. Behavioral markers and recognizability of the smile of enjoyment. J. Pers. Soc. Psychol. 1993;64:83–93.
    4. Rinn WE. The neuropsychology of facial expression: a review of the neurological and psychological mechanisms for producing facial expression. Psychol. Bull. 1984;95:52–77.
    5. Kunz M, Chen JI, Lautenbacher S, Vachon-Presseau E, Rainville P. Cerebral regulation of facial expressions of pain. J. Neurosci. 2011;31:8730–8738.
