J Neurophysiol. 2021 Jul 1;126(1):82-94.
doi: 10.1152/jn.00385.2020. Epub 2021 Apr 14.

Compensating for a shifting world: evolving reference frames of visual and auditory signals across three multimodal brain areas

Valeria C Caruso et al. J Neurophysiol.

Abstract

Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually guided saccades from variable initial fixation locations and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head- or eye-orientation). We found a progression of reference frames across areas and across time, with considerable hybridness and persistent differences between modalities during most epochs and brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and all temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did the auditory signals become "predominantly" eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.

New & Noteworthy

Models for visual-auditory integration posit that visual signals are eye-centered throughout the brain, whereas auditory signals are converted from head-centered to eye-centered coordinates. We show instead that both modalities largely employ hybrid reference frames: neither fully head- nor eye-centered. Across three hubs of the oculomotor network (intraparietal cortex, frontal eye field, and superior colliculus), visual and auditory signals evolve from hybrid to a common eye-centered format via different dynamics across brain areas and time.

Keywords: coordinate transformations; frontal eye field (FEF); intraparietal cortex (LIP, MIP); multisensory; superior colliculus (SC).


Conflict of interest statement

No conflicts of interest, financial or otherwise, are declared by the authors.

Figures

Graphical abstract

Figure 1.
Rationale of the study: the brain areas, stimuli, task, and quantification of reference frame. A: anatomical connections and auditory inputs of M/LIP, FEF, and SC. FEF and M/LIP receive auditory inputs from the primary auditory cortex (A1). The SC receives auditory inputs from the inferior colliculus (IC), auditory cortex, M/LIP, and FEF. B: locations of stimuli and initial fixations. All visual stimuli were green lights: the different colors of the fixation lights in the schematics serve to distinguish tuning curves constructed from different initial fixations in the following graphs. C: task: each trial starts with the appearance of a light which the monkey is required to fixate. After a variable delay of 900–1,200 ms, a visual or auditory target is presented. After a second variable delay of 600–900 ms, the fixation light disappears and the monkey reports the location of the target by saccading to it. D–G: schematics of the relative alignment of the tuning curves from three initial fixation positions plotted in head- and eye-centered coordinates. The strength of the tuning curve alignment in eye- and head-centered coordinates is quantified with the indices Reye and Rhead, which reflect the average correlation between tuning curves. D: eye-centered coordinates. The three tuning curves align well in eye-centered coordinates (Reye ≈ 1, right) and are separated by the distance between the initial eye positions in head-centered coordinates (Rhead ≈ 0, left). Gain differences across tuning curves do not contribute to the metric chosen to quantify their alignment (bottom). E: head-centered coordinates. The pattern is the opposite of D. F: hybrid-partial shift coordinates. The three tuning curves are not perfectly aligned in either head- or eye-centered coordinates, but are separated by less than the distance between the fixation locations in both coordinate systems. G: hybrid-complex coordinates. Both the shape and the alignment vary with the initial eye location.
H–K: assessment of reference frame through the statistical comparison of Reye vs. Rhead for each cell. The coordinate systems are classified as eye-centered if Reye > Rhead (orange dots) (H), head-centered if Rhead > Reye (blue) (I), hybrid-partial shift if Reye ≈ Rhead ≠ 0 (dark gray) (J), and hybrid-complex if Reye ≈ Rhead ≈ 0 (light gray) (K). The quantitative comparison was carried out via a bootstrap analysis (see materials and methods). FEF, frontal eye field; M/LIP, lateral and medial parietal cortex; SC, superior colliculus.
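The alignment indices can be illustrated with a small sketch. This is not the authors' code; the function and variable names are hypothetical, and the caption only specifies that Reye and Rhead reflect the average correlation between tuning curves after aligning targets in each frame, which is what is implemented here:

```python
import numpy as np
from itertools import combinations

def reference_frame_index(curves, fixations, targets, frame="eye"):
    """Average pairwise correlation between tuning curves recorded from
    different initial fixations, after aligning target locations in the
    chosen reference frame ("eye" or "head").

    curves    : list of 1-D arrays, firing rate at each head-centered target
    fixations : initial eye position (deg) for each curve
    targets   : head-centered target locations (deg), common to all curves
    """
    correlations = []
    for i, j in combinations(range(len(curves)), 2):
        # In head-centered coordinates the curves are compared as recorded;
        # in eye-centered coordinates, target t for fixation i lands at the
        # same retinal location as target t + (fix_j - fix_i) for fixation j.
        shift = (fixations[j] - fixations[i]) if frame == "eye" else 0.0
        yi, yj = [], []
        for a, ta in enumerate(targets):
            match = np.where(np.isclose(targets, ta + shift))[0]
            if match.size:  # keep only target pairs sampled in both curves
                yi.append(curves[i][a])
                yj.append(curves[j][match[0]])
        if len(yi) >= 2:
            correlations.append(np.corrcoef(yi, yj)[0, 1])
    return float(np.mean(correlations))
```

For a purely eye-centered cell (tuning that follows the eyes), the shifted curves superimpose and Reye approaches 1 while Rhead stays low, matching the schematic in panel D.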
Figure 2.
Examples of responses in the FEF. A–E: the three tuning curves from left fixation (in red), central fixation (in green), and right fixation (in blue) for an example cell during the sensory and motor period and in the visual and auditory modalities. The tuning curves are plotted in head-centered coordinates (left) and eye-centered coordinates (right). Note that sensory and motor panels have different scales. The horizontal lines indicate the baseline firing rate. The reference frame indices Rhead and Reye are indicated. See main text for a full description. FEF, frontal eye field.
Figure 3.
Frontal eye field (FEF) single cell reference frames for visual and auditory targets. A–D: the reference frame indices in head-centered and eye-centered coordinates (Rhead and Reye) are plotted for each individual neuron’s response to visual and auditory targets, in the sensory and motor periods. Auditory sensory (A), auditory motor (B), visual sensory (C), and visual motor (D). Responses are classified and color-coded as eye-centered, head-centered, hybrid-partial shift and hybrid-complex, based on the statistical comparison between their Reye and Rhead. The histogram insets show the distribution of Reye (horizontal histogram) and Rhead (vertical histogram) across the population of cells. Percentage of auditory (E) and visual (F) responses classified as eye-centered, head-centered, hybrid-partial shift, and hybrid-complex during the sensory and motor periods. Color code as in A. Time course of the average reference frame index (means ± SE) in eye-centered coordinates (Reye) for the visual and auditory populations, aligned to target onset (G) and saccade onset (H). The Reye are calculated in bins of 100 ms, sliding with a step of 50 ms. Filled circles indicate bins in which Reye was significantly greater than Rhead (as assessed with a t test, P value <0.05). Visual data were previously presented in Caruso et al. (2).
Figure 4.
M/LIP single cell reference frames for visual and auditory targets. A–F: presented in the same format as Fig. 3, A–F. Data from Mullette-Gillman et al. (10, 11). M/LIP, lateral and medial parietal cortex.
Figure 5.
SC single cell reference frames for visual and auditory targets. A–F: presented in the same format as Fig. 3, A–F. Data from Lee and Groh (8). SC, superior colliculus.
Figure 6.
Coordinate transformations across brain areas, modalities, and response periods. A: distribution of Reye for visual and auditory signals in the sensory and motor periods in each area (M/LIP, FEF, and SC). B: average degree of eye-centeredness (mean Reye) across neural populations in M/LIP, FEF, and SC for visual and auditory locations, at target onset, and at saccade execution. FEF, frontal eye field; M/LIP, lateral and medial parietal cortex; SC, superior colliculus.
Figure 7.
Time course of the eye-centered reference frame indices in M/LIP, FEF, and SC. The average eye-centered reference frame index, Reye (means ± SE) was computed in 100 ms bins (sliding with a step of 50 ms). Filled circles indicate that Reye was statistically larger than Rhead (see materials and methods). A: visual signals, aligned to the target onset (left) and to the saccade onset (right). B: auditory signals, aligned to the target onset (left) and to the saccade onset (right). FEF, frontal eye field; M/LIP, lateral and medial parietal cortex; SC, superior colliculus.
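The population time course described in this caption (mean Reye with standard error per sliding bin, plus a per-bin test of whether Reye exceeds Rhead) can be sketched as follows. This is a sketch under stated assumptions, not the authors' implementation: it assumes the per-cell indices have already been computed for each 100 ms bin, and it uses a one-sided paired t test as the per-bin comparison, whereas the paper defers the exact test to its materials and methods:

```python
import numpy as np
from scipy.stats import ttest_rel

def index_time_course(reye, rhead, alpha=0.05):
    """Population summary of the eye-centered reference frame index.

    reye, rhead : arrays of shape (n_cells, n_bins), each cell's indices
                  computed in sliding 100 ms bins stepped by 50 ms.
    Returns the population mean, its standard error, and a boolean mask
    of bins where Reye is significantly greater than Rhead.
    """
    mean = reye.mean(axis=0)
    sem = reye.std(axis=0, ddof=1) / np.sqrt(reye.shape[0])
    # Paired comparison across cells within each time bin
    t, p = ttest_rel(reye, rhead, axis=0)
    sig = (t > 0) & (p / 2 < alpha)  # one-sided: Reye > Rhead
    return mean, sem, sig
```

Bins flagged by `sig` would correspond to the filled circles in the figure.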

References

    1. Andersen RA, Mountcastle VB. The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. J Neurosci 3: 532–548, 1983. doi:10.1523/JNEUROSCI.03-03-00532.1983.
    2. Caruso VC, Pages DS, Sommer MA, Groh JM. Beyond the labeled line: variation in visual reference frames from intraparietal cortex to frontal eye fields and the superior colliculus. J Neurophysiol 119: 1411–1421, 2018. doi:10.1152/jn.00584.2017.
    3. Chang SW, Snyder LH. Idiosyncratic and systematic aspects of spatial representations in the macaque parietal cortex. Proc Natl Acad Sci USA 107: 7951–7956, 2010. doi:10.1073/pnas.0913209107.
    4. Chen X, Deangelis GC, Angelaki DE. Diverse spatial reference frames of vestibular signals in parietal cortex. Neuron 80: 1310–1321, 2013. doi:10.1016/j.neuron.2013.09.006.
    5. DeSouza JF, Dukelow SP, Gati JS, Menon RS, Andersen RA, Vilis T. Eye position signal modulates a human parietal pointing region during memory-guided movements. J Neurosci 20: 5835–5840, 2000. doi:10.1523/JNEUROSCI.20-15-05835.2000.
