Cereb Cortex. 2023 Feb 7;33(4):948-958.
doi: 10.1093/cercor/bhac113.

Noise-rearing precludes the behavioral benefits of multisensory integration


Naomi L Bean et al. Cereb Cortex.

Abstract

Concordant visual-auditory stimuli enhance the responses of individual superior colliculus (SC) neurons. This neuronal capacity for "multisensory integration" is not innate: it is acquired only after substantial cross-modal (e.g. auditory-visual) experience. Masking transient auditory cues by raising animals in omnidirectional sound ("noise-rearing") precludes their ability to obtain this experience and the ability of the SC to construct a normal multisensory (auditory-visual) transform. SC responses to combinations of concordant visual-auditory stimuli are depressed, rather than enhanced. The present experiments examined the behavioral consequence of this rearing condition in a simple detection/localization task. In the first experiment, the auditory component of the concordant cross-modal pair was novel, and only the visual stimulus was a target. In the second experiment, both component stimuli were targets. Noise-reared animals failed to show multisensory performance benefits in either experiment. These results reveal a close parallel between behavior and single neuron physiology in the multisensory deficits that are induced when noise disrupts early visual-auditory experience.

Keywords: cross-modal; development; multisensory integration; noise-rearing; vision.


Figures

Fig. 1
Apparatus and training performance. A) The detection and localization task was performed in a perimetry apparatus with LEDs and speakers at locations spanning the central 180° of space in 15° intervals (only the central 120° was tested here, the 0° location was used for fixation only). Each stimulus location contained a complex of 2 speakers and 3 LEDs at 2-cm separations. Large speakers mounted above the device delivered background noise. (Figure adapted from Gingras et al. 2009). B) Animals of both cohorts quickly learned to orient and approach visual (prior to Experiment 1) and auditory stimuli (prior to Experiment 2). Each animal’s performance is plotted individually (cat 1–5). Both normally reared and noise-reared animals learned the visual (top) and auditory (bottom) tasks rapidly, and there were no significant intergroup differences.
Fig. 2
Auditory stimuli failed to enhance visual localization performance in noise-reared animals. A and B) Bars show that coupling a novel auditory stimulus with the visual target stimulus (V) to create a cross-modal target (VA) significantly enhanced group multisensory performance (MEv) in normally reared animals, but not in their noise-reared counterparts. Open circles represent individual animal data, with lines connecting their unisensory and multisensory performance. Insets show the multisensory effect on d′. C) Z scores in boxplots for each location and each animal (gray dots) show multisensory localization performance relative to visual performance. The multisensory performance of normally reared animals was always significantly enhanced. In contrast, the multisensory performance of noise-reared animals (gray shading) was often no better than their visual performance. D) Central (C) and peripheral (P) errors are expressed as degrees of deviation from the target (0) in response to modality-specific (thin lines; visual in blue, auditory in red) and cross-modal (thick line, purple) stimuli. Shading illustrates enhanced performance. ***P < 0.001, ns = not significant.
Fig. 3
Noise-reared animals failed to show ME when both visual and auditory stimuli were targets. Conventions are the same as in Fig. 2, except that here the referent is SF. A) The multisensory performance of normally reared animals significantly exceeded SF. B) In contrast, the multisensory performance of noise-reared animals failed to reach SF predictions. C) Z scores show the contrasting performance of the two groups: enhancement in normally reared animals and depression in noise-reared animals (gray shading). D) Performance to modality-specific stimuli (lower thin lines; visual in blue, auditory in red) was similar between groups. Cross-modal performance in the normally reared group (thick line, purple) was above SF (green line); shading highlights this enhancement (purple). Cross-modal performance of noise-reared animals was below SF; shading highlights this deficit (green). ***P < 0.001.
Fig. 4
Group performance was stable over the testing period. Shown are visual (blue), auditory (red), and multisensory (purple) localization performance and SF predictions (green, Experiment 2). Within each experiment, performance was relatively stable across testing sessions (although noise-reared animals showed a gradual increase in responses to the visual stimuli in Experiment 2). However, following explicit auditory training between experiments, both cohorts showed an increase in correct multisensory approach responses.
