Cereb Cortex. 2024 Jun 4;34(6):bhae228.
doi: 10.1093/cercor/bhae228.

Visual periodicity reveals distinct attentional signatures for face and non-face categories


Genevieve L Quek et al. Cereb Cortex. 2024.

Abstract

Observers can selectively deploy attention to regions of space, moments in time, specific visual features, individual objects, and even specific high-level categories, for example when keeping an eye out for dogs while jogging. Here, we exploited visual periodicity to examine how category-based attention differentially modulates selective neural processing of face and non-face categories. We combined electroencephalography with a novel frequency-tagging paradigm capable of capturing selective neural responses for multiple visual categories contained within the same rapid image stream (faces/birds in Exp 1; houses/birds in Exp 2). We found that the pattern of attentional enhancement and suppression for face-selective processing is unique compared to other object categories: whereas attending to non-face objects strongly enhances their selective neural signals during a later stage of processing (300-500 ms), attentional enhancement of face-selective processing is both earlier and comparatively more modest. Moreover, only the selective neural response for faces appears to be actively suppressed by attending towards an alternate visual category. These results underscore the special status that faces hold within the human visual system, and highlight the utility of visual periodicity as a powerful tool for indexing selective neural processing of multiple visual categories contained within the same image sequence.

Keywords: EEG; face perception; frequency tagging; object recognition; selective attention.


Conflict of interest statement

The authors declare no competing interest.

Figures

Fig. 1
A) Examples of category exemplars used in Exp 1 (objects, faces, birds) and Exp 2 (objects, houses, birds). Instances of guitars (additional targets in Exp 2) are given in the object panel. B) Schematic of the “interlaced” sequence in Exp 1. Objects appeared at a rate of 6 Hz, with bird and face exemplars appearing as every fourth and fifth image, respectively. Where these 1.5 and 1.2 Hz frequencies overlapped every 20 images, a face image always took precedence (i.e. effectively skipping a bird presentation), allowing for exactly 48 bird and 48 face instances in each 40 s sequence. Sequence structure in Exp 2 was identical, save that house images replaced the face images.
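As an illustration of this timing scheme, the following is a minimal sketch (not the authors' code) of how one 40 s Exp 1 sequence could be laid out, assuming 240 image slots at 6 Hz with the face-precedence rule applied where the two periodicities coincide; variable names are illustrative.

    # Minimal sketch of the Exp 1 interlaced sequence described in Fig. 1B.
    # Assumes a 40 s stream at 6 Hz = 240 image slots; the elif ordering
    # implements the rule that a face takes precedence where the 1.2 Hz and
    # 1.5 Hz cycles coincide (every 20th slot). Illustrative only.
    N_SLOTS = 240  # 40 s x 6 images per second

    sequence = []
    for pos in range(1, N_SLOTS + 1):
        if pos % 5 == 0:               # every 5th image -> face (6 Hz / 5 = 1.2 Hz)
            sequence.append("face")
        elif pos % 4 == 0:             # every 4th image -> bird (6 Hz / 4 = 1.5 Hz)
            sequence.append("bird")
        else:
            sequence.append("object")  # base-rate filler objects

    print(sequence.count("face"), sequence.count("bird"))  # -> 48 48

With face precedence at the 12 coinciding slots, the bird count drops from 60 to 48, matching the 48 bird and 48 face instances stated in the legend.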
Fig. 2
A) The grand averaged amplitude spectrum for Exp 1, shown up to 18 Hz for visualization purposes. Strong and significant responses were evident at the image presentation frequency (i.e. 6 Hz & harmonics), face-selective frequency (1.2 Hz & harmonics), and bird-selective frequency (1.5 Hz & harmonics). B) The common visual response for Exp 1 (i.e. sum of 6, 12, 18, 24, 30, & 36 Hz), shown as a function of task and ROI. C) Scalp topographies for the Exp 1 common visual response in each task condition.
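For readers unfamiliar with frequency tagging, the quantification implied here can be sketched as follows: compute an amplitude spectrum over the 40 s epoch and sum the amplitudes at the image-presentation frequency and its harmonics. The sampling rate and variable names below are assumptions for illustration, not taken from the paper.

    # Sketch of the frequency-domain quantification implied by Fig. 2:
    # amplitude spectrum of one 40 s epoch, with the common visual response
    # taken as the sum of amplitudes at 6, 12, 18, 24, 30, and 36 Hz.
    # Sampling rate and the random "epoch" are placeholders.
    import numpy as np

    fs = 256                              # assumed sampling rate (Hz)
    epoch = np.random.randn(fs * 40)      # stand-in for one 40 s single-channel epoch

    amps = np.abs(np.fft.rfft(epoch)) / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), d=1 / fs)   # 0.025 Hz resolution for 40 s

    def amp_at(f_hz):
        # amplitude at the FFT bin closest to f_hz
        return amps[np.argmin(np.abs(freqs - f_hz))]

    common_visual_response = sum(amp_at(h) for h in (6, 12, 18, 24, 30, 36))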
Fig. 3
Experiment 1 frequency domain results. A) Conditional mean amplitudes at each harmonic frequency included in the quantification of the bird-selective response (left panel) and face-selective response (right panel), averaged across the left/right ROIs (see inset). All error bars are within-subjects standard error. B) The quantified bird- and face-selective responses (i.e. amplitudes summed across the harmonic ranges indicated in A), averaged across the left/right ROIs. C) Indices of enhancement (Attend Towards minus Baseline) and suppression (Attend Away minus Baseline) for the bird- and face-selective responses. Overlaid points are individual participants. (*P < 0.05, Bonferroni-corrected, one-sample t-test against zero). D) Conditional mean scalp topographies for the bird-selective (top row) and face-selective responses (bottom row) in Exp 1. Amplitude ranges are fixed across attention conditions for each signal type. Individual participant topographies for each condition are given in Supplemental Fig. S1A.
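The enhancement and suppression indices in panel C reduce to simple differences between attention conditions; a hypothetical sketch (placeholder amplitudes, not real data) is:

    # Sketch of the Fig. 3C indices, assuming each category-selective response
    # has already been quantified (summed over its harmonics) per condition.
    # Amplitudes below are placeholders, not real data.
    quantified = {                    # hypothetical summed amplitudes (microvolts)
        "baseline":       1.10,
        "attend_towards": 1.45,
        "attend_away":    0.90,
    }

    enhancement = quantified["attend_towards"] - quantified["baseline"]   # > 0 = enhanced
    suppression = quantified["attend_away"]    - quantified["baseline"]   # < 0 = suppressed
    # Each index is then tested against zero across participants
    # (one-sample t-tests, Bonferroni-corrected), as reported in Fig. 3C.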
Fig. 4
Category-selective responses in Exps 1 (A & B) and 2 (C & D), shown as a function of time from stimulus onset for each attention condition. A) The face-selective and B) bird-selective responses in Exp 1, averaged across the left and right OT ROIs. Shaded regions are within-subjects standard error. Colored points below each plot reflect Bayesian evidence at each timepoint for a difference between the Baseline (BL) & Attend Towards (AT) conditions and the Baseline (BL) & Attend Away (AA) conditions. Cool and warm colors denote evidence for the null and H1, respectively. Group-averaged headplots are given for select timepoints; amplitude ranges are fixed within signal type (i.e. same color bar for all headplots for each signal). C & D) As above, but for the house-selective and bird-selective signals in Exp 2. Category-selective waveforms separated by left and right OT ROIs are given in Supplemental Figs. S3 and S4.
Fig. 5
Experiment 2 frequency domain results. A) Conditional mean amplitudes at each harmonic frequency of the bird-selective response (left panel) and house-selective response (right panel), averaged across the left/right ROIs (see inset). All error bars are within-subjects standard error. B) The quantified bird- and house-selective responses (i.e. amplitudes summed across the harmonic ranges indicated in A), averaged across the left/right ROIs. C) Indices of enhancement and suppression for the bird-selective and house-selective signals, calculated separately using attend-cross as baseline (left panel) and attend-guitar as baseline (right panel). Overlaid points are individual participants. (*P < 0.01, Bonferroni-corrected, one-sample t-test against zero). D) Scalp topographies for the quantified bird-selective (top row) and house-selective responses (bottom row) in each condition of Exp 2. Individual participant topographies are given in Supplemental Fig. S1B. E) The common visual response shown as a function of task and ROI. F) Scalp topographies for the common visual response in each task condition.

