Relative precision of top-down attentional modulations is lower in early visual cortex compared to mid- and high-level visual areas

Sunyoung Park et al.

J Neurophysiol. 2022 Feb 1;127(2):504-518. doi: 10.1152/jn.00300.2021. Epub 2022 Jan 12.

Abstract

Top-down spatial attention enhances cortical representations of behaviorally relevant visual information and increases the precision of perceptual reports. However, little is known about the relative precision of top-down attentional modulations in different visual areas, especially compared with the highly precise stimulus-driven responses that are observed in early visual cortex. For example, the precision of attentional modulations in early visual areas may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas in prefrontal cortex that generate and relay the top-down signals. Here, we used functional MRI (fMRI) in human participants to assess the precision of bottom-up spatial representations evoked by high-contrast stimuli across the visual hierarchy. Then, we examined the relative precision of top-down attentional modulations in the absence of spatially specific bottom-up drive. Whereas V1 showed the largest relative difference between the precision of top-down attentional modulations and the precision of bottom-up modulations, midlevel areas such as V4 showed relatively smaller differences between the precision of top-down and bottom-up modulations. Overall, this interaction between visual areas (e.g., V1 vs. V4) and the relative precision of top-down and bottom-up modulations suggests that the precision of top-down attentional modulations is limited by the representational fidelity of areas that generate and relay top-down feedback signals.

NEW & NOTEWORTHY: When the relative precision of purely top-down and bottom-up signals was compared across visual areas, early visual areas like V1 showed higher bottom-up precision than top-down precision. In contrast, midlevel areas showed similar levels of top-down and bottom-up precision. This result suggests that the precision of top-down attentional modulations may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas generating and relaying the signals.

Keywords: fMRI; spatial attention; top-down feedback.


Conflict of interest statement

No conflicts of interest, financial or otherwise, are declared by the authors.

Figures

Graphical abstract
Figure 1.
A: task procedure for the top-down spatial attention task. Each trial started with a brief flicker of the white fixation dot at the center of the screen. After 300 ms, 1 of the 2 central cues, indicating either the quadrant (Diffuse) or the exact location (Focused) of the upcoming target, was shown for 500 ms. The cues validly predicted the target location in all trials. A cue-to-target delay period followed, and the duration of the delay period varied between 2 and 8 s (2 s in catch trials and 6–8 s in noncatch trials; for details, see Top-Down Spatial Attention Task). A uniform flickering noise stimulus at the same contrast level as the target for that trial was present during the delay period. After the delay, the target grating was presented for 150 ms in 1 of the 12 possible locations, and participants indicated by button press whether the orientation of the target was closer to horizontal or vertical. Placeholders (black circular outlines) were present throughout the task, marking the possible target locations arranged on an imaginary circle. After target offset, the subsequent trial began after an intertrial interval (ITI) of 5–7 s. Prescan training sessions used a slightly modified version of this task (see Prescan Training Session for details). B: task procedure for the bottom-up spatial mapping task. In each of the 3-s trials, a wedge-shaped flickering checkerboard was presented in 1 of 24 locations arranged on an imaginary circle. Together, the 24 wedges tiled the target location areas of the attention task, and in the decoding analyses data from adjacent wedges were combined to match the 12 target locations for the cross-generalization analysis (see Multivariate Pattern Decoding). In this task, participants responded by button press whenever the contrast of the fixation dot changed. C and D: behavioral performance in the top-down spatial attention task, combining data from the scanning sessions and an independent eye-tracking session (see Supplemental Methods). C: mean behavioral accuracy. Accuracy was higher in the diffuse than in the focused condition by 2%. D: mean tilt offset. Tilt offsets were higher in the diffuse than in the focused condition. Colored dots represent data from individual participants. Error bars represent ±1 SE.
Figure 2.
A: decoding accuracy based on functional MRI (fMRI) activation patterns in the bottom-up spatial mapping task (white bars) and the top-down spatial attention task (Diffuse and Focused conditions, light and dark gray bars, respectively). In the bottom-up mapping task, decoding accuracy was highest in V1 and decreased in later visual areas. In the top-down attention task, decoding accuracy was generally higher in the focused than in the diffuse condition. Whereas top-down decoding accuracy was much lower than bottom-up decoding accuracy in V1, accuracies in the bottom-up mapping task and the focused condition were comparable in later areas [e.g., V3AB, V4, intraparietal sulcus (IPS)], leading to an interaction between task type (bottom-up vs. top-down) and visual areas. Filled colored dots represent data from individual participants, and error bars represent ±1 SE. The dashed line indicates chance performance (1/12 or ∼0.083). B: to better visualize the interaction between task type and visual areas, we computed the ratio between bottom-up and top-down decoding accuracies for each region of interest (ROI). To obtain this ratio, the top-down decoding accuracy was divided by the bottom-up decoding accuracy within each ROI, separately for the diffuse and focused conditions. A low ratio score indicates that an ROI had higher decoding accuracy in the bottom-up mapping task, consistent with higher precision of bottom-up representations. A high ratio score indicates that an ROI had higher decoding accuracy in the top-down task, consistent with higher precision of top-down representations. Although V1 showed higher bottom-up precision, later areas showed comparable levels of bottom-up and top-down precision. Colored dots represent data from individual participants. Error bars are ±1 SE.
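The ratio score described in panel B is a simple per-ROI division. Below is a minimal Python sketch of how such a score might be computed, assuming per-participant decoding accuracies have already been obtained; the variable names and example values are illustrative only and are not taken from the paper.

```python
# Minimal sketch: top-down / bottom-up decoding-accuracy ratio per ROI.
# Assumes decoding accuracies (proportion correct) are already computed;
# the example values below are placeholders, not data from the study.
import numpy as np

def ratio_scores(top_down, bottom_up):
    """Per-participant ratio of top-down to bottom-up decoding accuracy."""
    return np.asarray(top_down) / np.asarray(bottom_up)

# Example usage with placeholder accuracies for one ROI and 3 participants:
focused_v1 = [0.20, 0.25, 0.18]    # top-down (focused) decoding accuracy in V1
bottom_up_v1 = [0.70, 0.65, 0.72]  # bottom-up mapping decoding accuracy in V1
print(ratio_scores(focused_v1, bottom_up_v1))  # values well below 1 -> higher bottom-up precision
```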
Figure 3.
Similar to Fig. 2 but only using data from the top-down attention task to train and test the classifier for the top-down decoding accuracy. A: decoding accuracy based on functional MRI (fMRI) activation patterns in the bottom-up spatial mapping task (white bars) and the top-down spatial attention task (Diffuse and Focused conditions, light and dark gray bars, respectively). For comparison, bottom-up decoding accuracies from Fig. 2A are plotted together. The general pattern of results followed Fig. 2A: decoding accuracy in the top-down attention task was generally higher in the focused than the diffuse condition, albeit to a lesser degree. Comparing across regions of interest (ROIs), although V1 showed much lower top-down decoding accuracy relative to bottom-up decoding accuracy, later areas [e.g., V3AB, V4, intraparietal sulcus (IPS)] showed comparable decoding accuracies across tasks, leading to an interaction between task type and visual areas. Filled colored dots represent data from individual participants, and error bars represent ±1 SE. The dashed line indicates chance performance (1/12 or ∼0.083). B: ratio of decoding accuracies between top-down attention and bottom-up mapping tasks. To obtain this ratio score, the top-down decoding accuracy for each ROI was divided by the bottom-up decoding accuracy in that ROI, separately for the diffuse and focused conditions. A low ratio score indicates that an ROI had higher decoding accuracy in the bottom-up mapping task, consistent with higher bottom-up precision. A high ratio score indicates that an ROI had higher decoding accuracy in the top-down task, consistent with higher top-down precision. Although V1 showed relatively higher bottom-up precision, later areas showed comparable levels of bottom-up and top-down precision, showing a pattern of results similar to Fig. 2B. Colored dots represent data from individual participants. Error bars are ±1 SE.
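As a rough illustration of what training and testing a classifier only on the attention-task data could look like in practice, the sketch below sets up a generic leave-one-run-out decoding analysis with scikit-learn. The paper's actual classifier, preprocessing, and cross-validation scheme are described in its Methods; the estimator choice and all data below are assumptions for illustration only.

```python
# Generic sketch of leave-one-run-out decoding of the cued location (1 of 12)
# from trial-wise fMRI activation patterns. Placeholder data; the estimator
# and cross-validation scheme are assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 8, 24, 500
X = rng.standard_normal((n_runs * trials_per_run, n_voxels))  # voxel patterns per trial
y = rng.integers(0, 12, size=n_runs * trials_per_run)         # cued location labels
runs = np.repeat(np.arange(n_runs), trials_per_run)           # run label per trial

clf = LogisticRegression(max_iter=2000)
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=runs)
print(f"mean decoding accuracy = {scores.mean():.3f} (chance ~ {1/12:.3f})")
```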
Figure 4.
Confusion matrices of classifier predictions for the presented stimulus location in the bottom-up mapping task and the cued location in the top-down attention task. For analysis and visualization purposes, spatial locations were arbitrarily labeled from 1 to 12, 1 being the leftmost position in the first quadrant with numbers increasing in the clockwise direction. Each cell within the matrices was colored based on values within the range of 0–80% for the bottom-up mapping task and 0–50% for the top-down attention task conditions, as indicated in the color bars on the right (this was done to make patterns easier to discern). The vertical dashed white lines in the diffuse condition matrices (middle) divide the 12 spatial locations into the 4 cued quadrants. In the bottom-up mapping task (top), classifier predictions were clustered on the diagonal, where the predicted location closely tracked the actual stimulus location. In the focused condition of the top-down attention task (bottom), the diagonal pattern was visible but to a lesser degree than in the bottom-up mapping task. In the diffuse condition of the top-down attention task (middle), classifier predictions were clustered within the cued quadrant, indicating that the attentional modulation was spread across the whole quadrant, consistent with subjects using the diffuse cue as intended.
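The row-normalized confusion matrices in Fig. 4 can be read as P(predicted location | actual or cued location), expressed as percentages. A small sketch of that normalization using scikit-learn's confusion_matrix is given below; the trial labels are random placeholders used only to show the computation.

```python
# Sketch: 12x12 row-normalized confusion matrix of classifier predictions,
# i.e., P(predicted location | actual/cued location) expressed in percent.
# The labels below are random placeholders used only to show the normalization.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
actual = rng.integers(0, 12, size=600)     # presented (or cued) location per trial
predicted = rng.integers(0, 12, size=600)  # classifier prediction per trial

cm = confusion_matrix(actual, predicted, labels=np.arange(12), normalize="true") * 100
print(cm.round(1))  # each row sums to ~100%
```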
Figure 5.
A: diagonal and off-diagonal regressors used to fit the confusion matrices shown in Fig. 4. Cells colored black were assigned 1’s, and cells colored white were assigned 0’s. B: ratio of beta weights in the bottom-up mapping task and in the diffuse and focused conditions of the top-down attention task. The beta weights for the diagonal regressor were divided by the beta weights for the off-diagonal regressor and then plotted on a logarithmic scale. Beta ratios above 0 indicate that the diagonal beta weight was larger than the off-diagonal beta weight, and beta ratios below 0 indicate that the off-diagonal beta weight was larger than the diagonal beta weight. Beta weight ratios in the bottom-up mapping task were all above 0, with the highest value in V1 and gradually decreasing values in later areas. Beta weight ratios in the diffuse condition of the top-down attention task were close to 0 across all regions of interest (ROIs). Beta weight ratios in the focused condition of the attention task were above 0 for all ROIs; however, compared with the ratios from the bottom-up task, the ratios for earlier areas were much smaller, whereas the ratios for later areas were at a similar level. Colored dots represent data from individual participants. Error bars are ±1 SE. IPS, intraparietal sulcus.
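One way to read the analysis in panel B is as a least-squares fit of each (vectorized) confusion matrix on an intercept plus the two binary regressors from panel A, followed by taking the log of the diagonal-to-off-diagonal beta ratio. The sketch below follows that reading; the exact cell assignments of the off-diagonal regressor are shown only in Fig. 5A, so the adjacent-location pattern used here, and the log base, are assumptions for illustration.

```python
# Illustrative sketch of the diagonal vs. off-diagonal beta-ratio analysis.
# The off-diagonal regressor pattern (here: the two locations flanking the
# true location) and the log base are assumptions; see Fig. 5A and the Methods.
import numpy as np

n_loc = 12
diagonal = np.eye(n_loc)                                                     # 1 on the diagonal
off_diagonal = np.roll(np.eye(n_loc), 1, axis=1) + np.roll(np.eye(n_loc), -1, axis=1)

def log_beta_ratio(confusion):
    """Fit confusion ~ intercept + diagonal + off_diagonal; return log10(beta_diag / beta_off)."""
    X = np.column_stack([np.ones(n_loc * n_loc), diagonal.ravel(), off_diagonal.ravel()])
    betas, *_ = np.linalg.lstsq(X, confusion.ravel(), rcond=None)
    return np.log10(betas[1] / betas[2])

# Example: a confusion matrix concentrated on the diagonal yields a ratio above 0.
example = 0.6 * np.eye(n_loc) + 0.1 * off_diagonal + 0.02
print(round(log_beta_ratio(example), 2))  # > 0 -> diagonal beta larger than off-diagonal beta
```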
