J Vis. 2020 Aug 3;20(8):1. doi: 10.1167/jov.20.8.1.

Modality-specific and multisensory mechanisms of spatial attention and expectation

Arianna Zuanazzi et al., J Vis.

Abstract

In our natural environment, the brain needs to combine signals from multiple sensory modalities into a coherent percept. Whereas spatial attention guides perceptual decisions by prioritizing processing of signals that are task-relevant, spatial expectations encode the probability of signals over space. Previous studies have shown that behavioral effects of spatial attention generalize across sensory modalities. However, because they manipulated spatial attention as signal probability over space, these studies could not dissociate attention and expectation or assess their interaction. In two experiments, we orthogonally manipulated spatial attention (i.e., task-relevance) and expectation (i.e., signal probability) selectively in one sensory modality (i.e., primary modality) (experiment 1: audition, experiment 2: vision) and assessed their effects on primary and secondary sensory modalities in which attention and expectation were held constant. Our results show behavioral effects of spatial attention that are comparable for audition and vision as primary modalities; however, signal probabilities were learned more slowly in audition, so that spatial expectations were formed later in audition than vision. Critically, when these differences in learning between audition and vision were accounted for, both spatial attention and expectation affected responses more strongly in the primary modality in which they were manipulated and generalized to the secondary modality only in an attenuated fashion. Collectively, our results suggest that both spatial attention and expectation rely on modality-specific and multisensory mechanisms.


Figures

Figure 1.
Design and example trials of experiment 1 (audition to vision). A. Experiment 1: auditory spatial attention and expectation (i.e., signal probability) were manipulated in a 2 (auditory modality—dark orange, vs. visual modality—light blue) × 2 (attended hemifield vs. unattended hemifield) × 2 (expected hemifield vs. unexpected hemifield) factorial design. For illustration purposes, stimulus locations (left/right) were collapsed. Presence versus absence of a response requirement is indicated by the hand symbol; the spatial signal probability manipulation is indicated by the %. B. Experiment 1: example of two trials in a session where auditory stimuli were presented with a probability of 0.7 in the left hemifield and 0.3 in the right hemifield. At the beginning of each run (i.e., 80 trials), a cue informed participants whether to attend and respond to auditory signals selectively in their left or right hemifield throughout the entire run. On each trial, participants were presented with an auditory or visual stimulus (100 ms duration) in either their left or right hemifield. They were instructed to respond with their index finger, as quickly and accurately as possible, to auditory stimuli only in the attended hemifield and to all visual stimuli irrespective of hemifield. The response window was limited to 1500 ms. Participants were not explicitly informed that auditory signals were more likely to appear in one of the two hemifields. Instead, spatial expectation was implicitly learned within a session (i.e., day). C. Experiment 1: number of auditory (dark orange) and visual (light blue) trials in the 2 (attended vs. unattended hemifield) × 2 (expected vs. unexpected hemifield) design (pooling over left/right stimulus location). Presence versus absence of a response requirement is indicated by the hand symbol.
The fraction of the area indicated by the “Response” hand symbol, pooled over the two bars of a particular run type (e.g., run type A), represents the response-related expectation (i.e., the general response probability: the overall probability that a response is required on a given trial). The general response probability is greater for run type A (85%), where attention and expectation are congruent, than for run type B (65%), where they are incongruent, as indicated in D. Note: the design and procedure of experiment 2 were comparable to those of experiment 1, with the only difference that vision was the primary modality and audition the secondary modality; in other words, in experiment 2 attention and expectation were manipulated selectively in vision.
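The general response probabilities quoted above (85% for run type A, 65% for run type B) can be reproduced with a short sketch. It assumes an equal split of auditory and visual trials within a run; that split and the function name are illustrative assumptions, not details stated in the caption.

```python
# Assumption (not stated in the caption): auditory and visual trials
# occur equally often within a run.
P_AUD = 0.5  # fraction of auditory trials
P_VIS = 0.5  # fraction of visual trials

def response_probability(p_expected_hemifield, attention_congruent):
    """Overall probability that a trial requires a response.

    Visual trials always require a response; auditory trials require one
    only when the stimulus falls in the attended hemifield. When attention
    and expectation are congruent, the attended hemifield is the one with
    the higher signal probability.
    """
    p_aud_in_attended = (
        p_expected_hemifield if attention_congruent else 1 - p_expected_hemifield
    )
    return P_VIS * 1.0 + P_AUD * p_aud_in_attended

# Run type A: attention and expectation congruent (attend the 0.7 hemifield)
print(response_probability(0.7, True))   # 0.85
# Run type B: attention and expectation incongruent (attend the 0.3 hemifield)
print(response_probability(0.7, False))  # 0.65
```

Under the equal-split assumption, the two printed values match the 85% and 65% figures given for run types A and B.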
Figure 2.
Behavioral results of experiments 1 and 2. Bar plots show the across-subjects mean (±SEM) RT for each of the six conditions with response requirements, pooling over left/right stimulus location, for experiment 1 (A; primary modality: audition, secondary modality: vision) and experiment 2 (B; primary modality: vision, secondary modality: audition). Overall slower RTs are observed for type B runs than for type A runs, reflecting differences in general response probability (see Figure 1D). C. Bar plots show the across-subjects mean (±SEM) ΔRT for effects of spatial expectation (attended unexpected − attended expected hemifield) in the primary (dark bars) and secondary (light bars) modalities for experiments 1 and 2. D. Effects of response probability (attended unexpected − attended expected hemifield) over time (i.e., first and second half: bars; consecutive sets of two attention runs: circles) for audition and vision as primary (dark bars) or secondary (light bars) modality. Brackets and stars indicate significance of main effects and interactions. *P < 0.05; **P < 0.01; ***P < 0.001. Audition: orange; vision: blue.
