Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition

David Alais et al. PLoS One. 2010 Jun 23;5(6):e11283.
doi: 10.1371/journal.pone.0011283.
Abstract

Background: An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question.

Methodology/principal findings: Three groups were trained daily for 10 sessions on an auditory, a visual, or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes.

Conclusions/significance: The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1
Figure 1. Temporal order discrimination thresholds measured in each of three sensory modalities (visual, auditory and audio-visual) across 10 separate days.
Data points show group means and error bars show ±1 standard error of the mean.
Figure 2
Figure 2. Proportional improvements in TOJ threshold performance measured for the various stimulus modalities and features as a consequence of different intervening training tasks.
The bars plot group means and error bars show ±1 standard error of the mean. Black bars represent TOJ improvement following visual onset training, white bars, auditory onset training, and grey bars, audio-visual onset training. Asterisks indicate significant threshold improvement (α<.05). (a) Comparison of the generalisability of onset learning within and between stimulus modalities. Triangles signify within-modality improvement. Note that the only instance of between-modality improvement occurred for visual onset tasks following audio-visual onset training. (b) Comparison of the generalisability of onset learning to other stimulus features. Whereas visual and audio-visual learning generalised across both orientation and location to visual onset judgments, auditory learning failed to generalise to other frequencies. Note also the lack of generalisation from onsets to offsets.
Figure 3
Figure 3. Temporal structure of stimuli in each of the three training conditions.
Top: visual onset training; middle: auditory onset training; bottom: audio-visual onset training. Visual and auditory stimuli are represented as black and white curves respectively. Each stimulus condition is composed of two targets: left vs. right of fixation (visual onset condition); left vs. right ear (auditory onset condition); and visually vs. auditorily presented (audio-visual onset condition). Within each trial, target increment onsets (dotted curves) are delayed with respect to each other by the “target onset asynchrony” (vertical shaded region) and are linearly summed with a pedestal presented throughout the trial (solid curves). As described in the Methods, each target/pedestal combination is amplitude modulated at 2.5 Hz throughout the trial, with a temporal phase difference of 180° applied to each.
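For readers who want a concrete picture of this stimulus structure, the Python sketch below generates the two amplitude envelopes described in the Figure 3 caption: rectangular target increments whose onsets differ by a target onset asynchrony, summed with a pedestal present throughout the trial and modulated at 2.5 Hz with a 180° phase offset between the two streams. This is not the authors' code; only the 2.5 Hz modulation rate and the 180° phase difference come from the caption, while the trial duration, pedestal and increment amplitudes, target timing, and sampling rate are illustrative assumptions.

```python
# Minimal sketch of the stimulus envelopes described in Figure 3.
# Only the 2.5 Hz modulation rate and 180 deg phase offset are taken from the paper;
# every other parameter value below is an illustrative assumption.

import numpy as np

def stimulus_envelopes(trial_dur=2.0,      # assumed trial duration (s)
                       sample_rate=1000,   # assumed sampling rate (Hz)
                       toa=0.05,           # target onset asynchrony (s), illustrative
                       target_onset=1.0,   # assumed onset time of the first target (s)
                       target_dur=0.2,     # assumed target increment duration (s)
                       pedestal=1.0,       # assumed pedestal amplitude
                       increment=0.5,      # assumed target increment amplitude
                       mod_rate=2.5):      # 2.5 Hz amplitude modulation (from the caption)
    """Return the amplitude envelopes of the two target/pedestal streams for one trial."""
    t = np.arange(0.0, trial_dur, 1.0 / sample_rate)

    # Rectangular target increments; the second onset is delayed by the onset asynchrony.
    target_a = increment * ((t >= target_onset) & (t < target_onset + target_dur))
    target_b = increment * ((t >= target_onset + toa) & (t < target_onset + toa + target_dur))

    # Target increments are linearly summed with a pedestal present throughout the trial.
    env_a = pedestal + target_a
    env_b = pedestal + target_b

    # Each target/pedestal combination is amplitude modulated at 2.5 Hz,
    # with a 180 deg temporal phase difference between the two streams.
    mod_a = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_rate * t))
    mod_b = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_rate * t + np.pi))
    return env_a * mod_a, env_b * mod_b

# Example: envelopes for a trial with a 50 ms target onset asynchrony.
env_a, env_b = stimulus_envelopes(toa=0.05)
```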
