Idiosyncratic fixation patterns generalize across dynamic and static facial expression recognition

Anita Paparelli et al. Sci Rep. 2024 Jul 13;14(1):16193. doi: 10.1038/s41598-024-66619-4.

Abstract

Facial expression recognition (FER) is crucial for understanding the emotional state of others during human social interactions. It has long been assumed that humans share universal visual sampling strategies to achieve this task. However, recent studies in face identification have revealed striking idiosyncratic fixation patterns, questioning the universality of face processing. More importantly, very little is known about whether such idiosyncrasies extend to the biologically relevant recognition of static and dynamic facial expressions of emotion (FEEs). To clarify this issue, we tracked observers' eye movements while they categorized static and ecologically valid dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast and global luminance across exposure time. We then used robust data-driven analyses combining statistical fixation maps with hidden Markov models to explore eye movements across FEEs and stimulus modalities. Our data revealed three spatially and temporally distinct, equally occurring face-scanning strategies during FER. Crucially, these visual sampling strategies were largely comparable in effectiveness for FER and highly consistent across FEEs and modalities. Our findings show that spatiotemporal idiosyncratic gaze strategies also occur for the biologically relevant recognition of FEEs, further questioning the universality of FER and, more generally, of face processing.

Keywords: individual differences; facial expressions of emotion; eye movements.


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1
Fixation patterns (n = 3) in FER discovered through EMHMM clustering. Each representative HMM included three states (k = 3), depicted by sROIs 1 (red), 2 (green) and 3 (blue). Note that, because sROIs 1 and 3 in Group 1 and sROIs 2 and 3 in Group 2 were duplicates, each duplicate ellipse was displaced by 1 pixel to the right for better visualization. The third row shows the priors and transition matrices. Priors give the probability of the first fixation belonging to each state; transition probabilities give the probability of moving from one state to another, or of remaining in the same state, between consecutive fixations.
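The priors and transition matrices in Figure 1 fully specify how a gaze strategy unfolds over time. As a minimal illustrative sketch (the study itself fitted these models per observer with the EMHMM toolbox; all numbers and names below are hypothetical), a 3-state HMM over sROIs can generate a scanpath like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for one observer's strategy (made up for illustration):
# priors = P(first fixation lands in sROI 1..3)
# transitions[i, j] = P(next fixation in sROI j | current fixation in sROI i)
priors = np.array([0.6, 0.3, 0.1])
transitions = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.2, 0.6],
])

def sample_scanpath(n_fixations):
    """Sample a sequence of 0-based sROI indices from the HMM."""
    state = rng.choice(3, p=priors)           # first fixation drawn from priors
    path = [state]
    for _ in range(n_fixations - 1):
        state = rng.choice(3, p=transitions[state])  # next fixation from the current row
        path.append(state)
    return path

path = sample_scanpath(5)
```

Clustering observers then amounts to grouping scanpaths by which set of HMM parameters best explains them, which is how the three strategy groups in the figure arise.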
Figure 2
Heat maps illustrating the fixation biases of Groups 1, 2 and 3, with their associated statistical differences. Significant areas are demarcated by a black line. Yellow and blue clusters represent the respective groups’ differences.
Figure 3
Distribution of 72 participants across their assigned prevalent fixation-strategy groups and their overall level of consistency across the twelve eye-movement datasets. One participant was excluded because their EM datasets were split across groups, with none prevailing over the others. Note that 60% of the participants (“consistent observers”) used the same strategy in at least 10 of the 12 conditions.
Figure 4
(a) Percentages of observers employing the same strategy (1, 2 or 3) for static and dynamic modalities for each expression. (b) Distribution of observers employing the same strategy (1, 2 or 3) for static and dynamic modalities, from 1 to 6 FEEs.
Figure 5
Distribution of EM datasets for all expressions within groups, respectively in static and dynamic modalities. Note that the distribution of expressions between groups is not represented here.
Figure 6
Observers’ FER accuracy in each of the twelve conditions across the three strategies. * p < 0.0017. Error bars represent 95% confidence intervals of the median number of correct responses for each group and condition.
Figure 7
Illustration of the six static facial expressions of emotion for one female identity.
Figure 8
A schematic representation of the procedure. Each trial started with a central fixation cross followed by a facial expression presented for 1 s at a random location on the screen (e.g., top left). After each trial, participants provided their answer using labeled keys on a keyboard. The answer screen in French reads as follows: “press p for fear, c for anger, d for disgust, j for happiness, t for sadness, s for surprise, and I for ‘I don’t know’”.
