Clinical Trial
. 2018 Aug 20;13(8):e0202414.
doi: 10.1371/journal.pone.0202414. eCollection 2018.

Auditory spatial attention is encoded in a retinotopic reference frame across eye-movements


Martijn Jan Schut et al. PLoS One. 2018.

Abstract

The retinal location of visual information changes each time we move our eyes. Although it is now known that visual information is remapped in retinotopic coordinates across eye-movements (saccades), it is currently unclear how head-centered auditory information is remapped across saccades. Keeping track of the location of a sound source in retinotopic coordinates requires a rapid multi-modal reference frame transformation when making saccades. To reveal this reference frame transformation, we designed an experiment where participants attended an auditory or visual cue and executed a saccade. After the saccade had landed, an auditory or visual target could be presented either at the prior retinotopic location or at an uncued location. We observed that both auditory and visual targets presented at prior retinotopic locations were reacted to faster than targets at other locations. In a second experiment, we observed that spatial attention pointers obtained via audition are available in retinotopic coordinates immediately after an eye-movement is made. In a third experiment, we found evidence for an asymmetric cross-modal facilitation of information that is presented at the retinotopic location. In line with prior single cell recording studies, this study provides the first behavioral evidence for immediate auditory and cross-modal transsaccadic updating of spatial attention. These results indicate that our brain has efficient solutions for solving the challenges in localizing sensory input that arise in a dynamic context.


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. Experimental procedure for Experiments 1 and 2.
A) An illustration of the set-up. Stimuli could appear on two axes on the screen. Fixation targets were presented on the bottom axis; cues and probes on the upper axis. B) A trial in the matching task. White noise was presented for 200 ms, and participants clicked the location from which they heard the sound originate. C) Experimental procedure during the auditory and visual blocks. The discrimination-task portion of the trial is shown in further detail on the right. Probe locations and probe delays varied across trials. D) An example of congruent and incongruent probe and memory locations with respect to saccade direction. Square = visual cue location; diagonal line = visual probe location.
Fig 2. Sigmoid fit of pointing responses to auditory locations of a single participant in the matching task.
The black dots show the participant's localization responses to a panned white-noise stimulus. The blue line shows the sigmoid fit to the responses. The locations of the probes used in the auditory block are superimposed on the right side and are connected via a black line. Note that the illustration of the screen is rotated by 90 degrees here, with respect to Fig 1, to align the stimulus location with the vertical axis of the graph.
Fig 3. Results from the linear mixed effects models in Experiments 1 and 2.
The green line represents the fit of the linear mixed model of reaction times to probes shown at the neutral location; all other lines are drawn relative to the neutral condition. The lines represent the fits from the linear mixed models, and the points indicate the binned average data after correcting for online saccade detection. In both the visual and auditory experimental blocks, probes at the location of the retinotopic trace are responded to significantly faster. This facilitation decreases with longer delays between saccade offset and probe onset, indicating that the retinotopic trace extinguishes over time. Shaded regions represent bootstrapped 95% CIs. To reduce visual overlap, the orange and blue lines have been offset slightly in the horizontal direction.
Fig 4. Results from the linear mixed effects model in Experiment 3.
The green line represents reaction times to probes shown at the neutral location; all other lines are drawn relative to the neutral condition. The lines represent the fits from the linear mixed models, and the points indicate the binned average data after correcting for online saccade detection. In both the visual and auditory experimental blocks, probes at the location of the retinotopic trace are responded to significantly faster. Shaded regions represent bootstrapped 95% CIs. To reduce visual overlap, the orange and blue lines have been offset slightly in the horizontal direction.
