PLoS One. 2020 Mar 10;15(3):e0230039. doi: 10.1371/journal.pone.0230039. eCollection 2020.

Perceived emotional expressions of composite faces

Markku Kilpeläinen et al. PLoS One. 2020.

Abstract

The eye and mouth regions serve as the primary sources of facial information regarding an individual's emotional state. The aim of this study was to provide a comprehensive assessment of the relative importance of those two information sources in the identification of different emotions. The stimuli were composite facial images, in which different expressions (Neutral, Anger, Disgust, Fear, Happiness, Contempt, and Surprise) were presented in the eyes and the mouth. Participants (21 women, 11 men, mean age 25 years) rated the expressions of 7 congruent and 42 incongruent composite faces by clicking on a point within the valence-arousal emotion space. Eye movements were also monitored. With most incongruent composite images, the perceived emotion corresponded to the expression of the eye region, the expression of the mouth region, or an average of the two. The happy expression was different: happy eyes often shifted the perceived emotion towards a slightly negative point in the valence-arousal space, not towards the location associated with a congruent happy expression. The eye-tracking data revealed significant effects of congruency, expression, and their interaction on total dwell time. Our data indicate that whether a face combining features from two emotional expressions leads to a percept based on only one of the expressions (categorical perception), an integration of the two expressions (dimensional perception), or something altogether different depends strongly on the expressions involved.


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Example of the stimulus display presented to participants at the beginning of a trial.
During each trial, one facial image was presented on the left side of the screen, and the V-A space and the term list (black background) on the right side. The participants' task was first to give a rating within the V-A space (top right) and then to select 1–2 terms (bottom right) that best described the emotional expression of the presented face. The participant had up to 3.5 seconds to view the facial image on the left, after which it disappeared. The expression portrayed in this example combines a happy mouth with neutral eyes. The example face image is adapted from the Radboud Faces Database (http://www.socsci.ru.nl:8180/RaFD2/RaFD?p=main). The copyright holder, Radboud Faces Database, has given written informed consent to publish this image.
Fig 2
Fig 2. Data from the two measurement methods are in agreement.
A) Examples of the congruent and incongruent stimuli used in the experiment. B) Perceptions of congruent emotional expressions as reflected in the direct V-A ratings (crosses, where the width indicates the 95% CI over participants) and in the MDS analysis of terms selected from a list (circles). For presentation purposes, the MDS result was transformed (isotropically scaled and rotated) while keeping the structure of the data intact. Our simulations (10,000 randomizations) showed that such a strong correspondence between the structures of the rating data and the MDS analysis (as indicated by the sum of squares of the location differences) is very unlikely to occur by chance (p<0.001). C) Examples of changes in perception when either the mouth (left) or the eye (right) region was changed from the congruent expression. The point of each arrow indicates where perception on average shifted in the V-A space. For example, when the mouth was changed from happy to disgusted (left) while the eyes remained happy, perception shifted almost completely to a point corresponding to the perception of congruent disgust (long blue arrow originating from the magenta cross). Each small dot represents the average of one participant's ratings in the corresponding conditions (magenta: congruent happy; blue: mouth (left) or eyes (right) changed to disgusted).
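The randomization test described in the caption — aligning the MDS configuration to the V-A ratings by isotropic scaling and rotation, scoring the fit by a sum of squares of location differences, and comparing against shuffled point labels — can be sketched as below. This is a hypothetical reconstruction, not the authors' code: the Procrustes-style alignment and the label-shuffling null are assumptions, all point data and function names are ours.

```python
import numpy as np

def procrustes_ss(X, Y):
    """Align Y to X by isotropic scaling and rotation (optimal rotation
    via SVD of the cross-covariance matrix), then return the sum of
    squared distances. X, Y: (n, 2) arrays, e.g. 7 expressions in the
    valence-arousal space."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc)   # cross-covariance SVD
    R = U @ Vt                            # optimal rotation
    scale = s.sum() / (Yc ** 2).sum()     # optimal isotropic scale
    Y_aligned = scale * Yc @ R.T
    return ((Xc - Y_aligned) ** 2).sum()

def permutation_p(X, Y, n_perm=10_000, rng=None):
    """Estimate the probability that a correspondence as strong as the
    observed one (i.e. an aligned sum of squares as small) arises by
    chance, by shuffling the point labels of Y."""
    rng = np.random.default_rng(rng)
    observed = procrustes_ss(X, Y)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(Y))
        if procrustes_ss(X, Y[perm]) <= observed:
            count += 1
    return (count + 1) / (n_perm + 1)     # add-one-smoothed p value
```

With two configurations that are identical up to scale and rotation, the aligned sum of squares is essentially zero and the permutation p value is small, mirroring the p<0.001 correspondence reported in the caption.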
Fig 3
Fig 3. Shifts in perceived emotions due to incongruence.
A) All shifts caused by changing the expression in the mouth region (left) or the eye region (right). The widths of the crosses indicate the 95% CIs (across participants) for the ratings of congruent expressions (see labels); the colours of the crosses indicate the congruent expression from which the mouth or eyes were changed. The arrows represent the perception shift caused by changing the mouth region (left) or the eye region (right) to the expression indicated by the arrow colour (see legend). The length of an arrow represents the relative magnitude of the change: an arrow that reaches the vicinity of the black circle at the end of the dashed line indicates that the percept completely followed the changed feature, while an arrow that reaches the line perpendicular to the direct (dashed) path towards the centre indicates a percept based equally on both regions. Asterisks indicate shifts of considerable magnitude whose direction differs statistically significantly (t(31)>2.9, p<0.05 with Bonferroni correction in all tests; see Tables 1 and 2) from the direction expected based on the two emotions present in the composite faces (the dashed line). B) The unexpected shifts (marked with asterisks in A), presented in the V-A space.
