J Neurosci. 2019 Feb 20;39(8):1374-1385.
doi: 10.1523/JNEUROSCI.1806-18.2018. Epub 2018 Dec 20.

Cross-Modal Competition: The Default Computation for Multisensory Processing


Liping Yu et al. J Neurosci.

Abstract

Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience, the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events, thereby increasing their impact on the sensorimotor circuitry of the SC and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.

Significance Statement

The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized.
A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.

Keywords: computational modeling; enhancement; inhibition; integration; plasticity; superior colliculus.


Figures

Figure 1.
A neuro-computational model of SC multisensory integration. External visual and auditory inputs to the multisensory layers of the SC are abstractly represented as derived from either the visual (AEV) or auditory (FAES) subdivisions of the anterior ectosylvian sulcus (AES) or from non-AES sources (“Visual area”, “Auditory area”). Each input region contacts principal (excitatory) neurons in the SC through competitive or cooperative pathways: the projections from non-AES sources are arranged to be functionally competitive across the modalities, implemented via independent excitatory projections (A and V) and reciprocal inhibitory synapses. A, This computation dominates the default, or naïve, state that exists without covariant cross-modal experience. B, Projections from AES are arranged to be functionally cooperative (AV); moreover, these input regions can suppress the native competitive mechanism through dedicated inhibitory synapses. Excitatory and inhibitory AES projections are strengthened by covariant cross-modal experience and instantiate the multisensory enhancement capabilities of the neurotypic adult.
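The competitive (A) and cooperative (B) pathways can be caricatured in a few lines of code. This is an illustrative sketch, not the authors' model equations: `naive_response` applies reciprocal, drive-scaled inhibition with an assumed weight of 0.5, while `trained_response` assumes the AES input fully suppresses that inhibition so the two excitatory drives simply sum.

```python
def naive_response(v, a, w_inhib=0.5):
    # Competition: each modality's excitatory drive is reduced by
    # inhibition scaled by the other modality's drive (rectified at zero).
    return max(v - w_inhib * a, 0.0) + max(a - w_inhib * v, 0.0)

def trained_response(v, a):
    # Cooperation: the AES-mediated pathway suppresses the cross-modal
    # inhibition, so the excitatory drives sum.
    return v + a

# Balanced cues: the naive circuit shows no enhancement over the best
# unisensory drive, whereas the trained circuit shows a large one.
print(round(naive_response(1.0, 1.0), 2))    # 1.0
print(round(trained_response(1.0, 1.0), 2))  # 2.0

# Imbalanced cues: in the naive circuit the weak channel still exerts
# inhibition, so the combined response falls below the best input alone.
print(round(naive_response(0.3, 1.0), 2))    # 0.85
```

With these assumed weights, balanced inputs yield no multisensory enhancement and imbalanced inputs yield depression, qualitatively matching the naive behavior described in the legend.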
Figure 2.
Typical unisensory and multisensory responses in naïve and normal SC neurons. Depicted for each neuron are its responses to visual and auditory stimuli presented alone and together in spatiotemporal concordance. On the left are the impulse rasters for each response (ordered bottom-to-top); on the right are summary histograms of the average number of impulses elicited in each condition. Vertical lines through the bars represent the standard error of the mean (SEM). A, The multisensory response in the naïve exemplar was not significantly greater than its largest unisensory response (here, V), and thus appeared to be insensitive to the auditory input in the VA condition. Response magnitudes (impulses/trial): V = 4.5, A = 3.65, VA = 5.1; UI = 10%, ME = 13% (multisensory vs unisensory, p = 0.307, Mann–Whitney U test). B, This result contrasts with the response pattern in the normal exemplar, whose multisensory response was significantly enhanced by the auditory stimulus, becoming 117% greater than its strongest unisensory response (V). V = 6.17, A = 5.75, VA = 13.4; UI = 3%, ME = 117%. **p = 7.50E−7.
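The legend does not define ME and UI explicitly. Assuming the field's standard definitions — ME as the percent difference between the multisensory response and the best unisensory response, and UI as the normalized absolute difference between the two unisensory responses — the values reported above are reproduced (truncated to whole percent):

```python
def enhancement_index(v, a, va):
    """ME: percent change of the multisensory response (va) relative to
    the best unisensory response (all in impulses/trial)."""
    best = max(v, a)
    return 100.0 * (va - best) / best

def imbalance_index(v, a):
    """UI: normalized absolute difference between unisensory responses."""
    return 100.0 * abs(v - a) / (v + a)

# Naive exemplar (Figure 2A): ME = 13%, UI = 10%
print(int(enhancement_index(4.5, 3.65, 5.1)))    # 13
print(int(imbalance_index(4.5, 3.65)))           # 10
# Normal exemplar (Figure 2B): ME = 117%, UI = 3%
print(int(enhancement_index(6.17, 5.75, 13.4)))  # 117
print(int(imbalance_index(6.17, 5.75)))          # 3
```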
Figure 3.
Relationships between multisensory responses and unisensory imbalance in normal and naïve cohorts. A, Neurons from normally-reared animals produce their greatest response enhancements when the spatiotemporally concordant cues produced balanced unisensory responses. This is illustrated by the inverse relationship between ME and UI (dotted line). B, Naïve SC neurons showed a similar inverse relationship between ME and UI, but even balanced samples failed to produce significantly enhanced multisensory products, and imbalanced samples induced multisensory depression. C, Histograms summarizing the results. Vertical lines through the bars represent SEM. **p < 0.001, *p < 0.05.
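The inverse ME-UI relationship in the naïve cohort is what a simple competitive rule predicts. As an illustration (not the authors' fitted model), suppose competition drives the multisensory response toward the average of the two unisensory responses; ME then falls monotonically as UI grows:

```python
def me_under_averaging(v, a):
    # Competitive rule (illustrative): the multisensory response is the
    # mean of the unisensory responses, so it can never exceed the best one.
    va = (v + a) / 2.0
    best = max(v, a)
    return 100.0 * (va - best) / best

def ui(v, a):
    # Unisensory imbalance: normalized difference between the responses.
    return 100.0 * abs(v - a) / (v + a)

# Sweep from balanced to strongly imbalanced unisensory responses:
# ME is 0% when balanced and grows more negative (depression) with UI.
for v in (10.0, 8.0, 6.0, 4.0, 2.0):
    print(f"UI = {ui(v, 10.0):5.1f}%   ME = {me_under_averaging(v, 10.0):6.1f}%")
```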
Figure 4.
Increasing unisensory imbalance revealed a naïve neuron's native state in which congruent cross-modal cues are treated as competitors and yield response depression. Shown are responses of a naïve neuron to visual and auditory stimuli of different intensities that produced three levels of response imbalance. Top row, The impulse rasters (left) and summary histograms (right) show that the unisensory response magnitudes differed very little. Combining the visual and auditory stimuli produced a multisensory response product that was not significantly different from the best unisensory comparator response (auditory). Response magnitudes (impulses/trial): V = 6.1, A = 7.55, VA = 8.33; UI = 11%, ME = 10% (Mann–Whitney U test, p = 0.287). Second row, The responses to the visual and auditory stimuli differed greatly, and their combination produced a response 15% below the best unisensory response. V = 2.25, A = 7.15, VA = 6.1; UI = 52%, ME = −15% (Mann–Whitney U test, p = 0.071). Bottom row, The visual and auditory response differences were greatest here, as was the level of depression produced by their combination (−25%). V = 0.6, A = 6.55, VA = 4.9; UI = 83%, ME = −25% (t test, p = 0.014). *p = 0.0198. Conventions are the same as in Figure 2.
Figure 5.
Sensory training develops normal multisensory integration capabilities in the naïve animal. A, After naïve animals experienced the sensory training procedure, the mean level of unisensory imbalance in their multisensory responses decreased below even that of the normal animal. B, The inverse relationship between ME and UI identified in the normal and naïve cohorts was also identified in the trained cohort. The slopes (C) and intercepts (D) of the regression lines fitting the relationship between ME and UI were quantitatively similar in the trained and normal conditions, but differed significantly from the line fit for the naïve condition. Nevertheless, some negative ME scores like those in the naïve condition remained in the trained animals. Plotted are the means and 95% confidence intervals for the slope and intercept parameters of the three groups. *p < 0.05.
Figure 6.
Stimulus configuration does not affect the multisensory interaction in the naïve condition. Plotted is the magnitude of ME in individual neurons as a function of the cross-modal spatial configuration presented in each of the three conditions examined. Each circle represents a single neuron, and the diagonal line is the line of equality between the results obtained with spatially congruent and noncongruent stimulus configurations. A, In the normal condition, a neuron responded with enhanced multisensory responses to visual and auditory stimuli presented within its overlapping RFs (“spatial congruence”). However, when one of those stimuli was presented outside its RF (spatial disparity, or “spatial noncongruence”), the same neuron's response was usually depressed, revealing the competition. B, In the naïve condition (i.e., no visual-auditory experience), however, there was no relationship between a neuron's multisensory response and the spatial configuration of the visual and auditory stimuli: both congruent and noncongruent configurations resulted in similar degrees of competition, as indicated by most circles clustering around the line of equality. C, Instantiation of the normal condition was achieved with the cross-modal training program.
Figure 7.
The neuro-computational model predictions closely matched the empirical results from the naïve condition. A, The neuro-computational model predicted the inverse relationship between ME and UI that was observed in the physiological data. Results displayed are the 500 units from the 10,000 model simulations whose unisensory response magnitudes most closely resembled those in the empirical sample. B, The model simulations whose unisensory response levels most closely matched those observed for each empirical sample were selected from a pool of 10,000 simulations. Plotted are the visual and auditory responses (normalized units, X and + symbols, respectively) of the empirical sample (x-axis) versus those of the closest-matched model unit (y-axis). Note that the symbols fall on the line of equality, showing the close match. C, Shown is the fit between the multisensory responses of each empirically-recorded neuron (x-axis, normalized) and its prediction from the model (y-axis). Note the close model-empirical match, with symbols clustering around the line of equality.
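Panel B describes selecting, from a pool of 10,000 simulations, the unit whose unisensory responses best match each recorded neuron. A minimal sketch of such a nearest-match selection follows; the pool here is random stand-in data, not the authors' simulations, and the matching criterion (Euclidean distance in the normalized V-A plane) is an assumption:

```python
import math
import random

random.seed(0)

# Hypothetical pool standing in for the 10,000 model simulations: each
# entry is one unit's normalized (visual, auditory) response magnitudes.
pool = [(random.random(), random.random()) for _ in range(10_000)]

def closest_match(v_obs, a_obs, pool):
    """Index of the simulated unit whose (V, A) unisensory responses are
    nearest (Euclidean distance) to the empirically observed pair."""
    return min(range(len(pool)),
               key=lambda i: math.hypot(pool[i][0] - v_obs, pool[i][1] - a_obs))

idx = closest_match(0.45, 0.62, pool)
print(pool[idx])  # unisensory responses of the best-matched simulated unit
```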
Figure 8.
The neuro-computational model predictions closely matched the empirical results for the trained condition. Repeated exposure to concordant visual-auditory stimulus pairs trained convergent, noncompetitive visual and auditory connections, leading to the development of multisensory enhancement capabilities in the neurons of these “VA trained” animals. A, The inverse trend between ME and UI in the VA trained condition resembles that of the normal animal. B, C, There is also a good match between the model predictions and the (normalized) unisensory and multisensory response magnitudes of the empirical sample. Conventions are the same as in Figure 7.
