Randomized Controlled Trial
Behav Res Methods. 2015 Dec;47(4):1136-1147. doi: 10.3758/s13428-014-0536-1.

Spontaneous facial expression in unscripted social interactions can be measured automatically

Jeffrey M Girard et al. Behav Res Methods. 2015 Dec.

Abstract

Methods to assess individual facial actions have potential to shed light on important behavioral phenomena ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge impeding development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47% women, 15% Nonwhite), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3% of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and the more stringent criterion of frame-by-frame reliability was moderate to strong (mean Matthews correlation coefficient = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6% of frames could be coded manually but not automatically. These findings suggest automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
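The frame-by-frame reliability reported above is the Matthews correlation coefficient (MCC), computed per AU between the manual and automated binary codes for each video frame. A minimal sketch of that computation, with illustrative frame sequences (not the paper's data):

```python
import math

def matthews_corrcoef(manual, automated):
    """MCC between two binary frame sequences (1 = AU present in a frame)."""
    tp = sum(m == 1 and a == 1 for m, a in zip(manual, automated))
    tn = sum(m == 0 and a == 0 for m, a in zip(manual, automated))
    fp = sum(m == 0 and a == 1 for m, a in zip(manual, automated))
    fn = sum(m == 1 and a == 0 for m, a in zip(manual, automated))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical manual (FACS coder) and automated codes for ten frames
manual    = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
automated = [0, 0, 1, 1, 0, 0, 0, 1, 0, 1]
print(round(matthews_corrcoef(manual, automated), 3))  # → 0.583
```

Unlike raw percent agreement, MCC accounts for chance agreement driven by low AU base rates, which is why it is the more stringent of the two reliability criteria reported here.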

Keywords: Affective computing; Automated coding; FACS; Facial expression.


Figures

Fig. 1 Examples of video frames with facial landmark tracking
Fig. 2 Base rates of all the coded facial action units from a subset of the data (n = 56)
Fig. 3 Automated FACS coding pipeline. Example shown is for AU 6+12
Fig. 4 Mean inter-system reliability for twelve FACS action units

