Cross-modal perceptual enhancement of unisensory targets is uni-directional and does not affect temporal expectations
- PMID: 34757275
- DOI: 10.1016/j.visres.2021.107962
Abstract
Temporal structures in the environment can shape temporal expectations (TEs), and previous studies demonstrated that TEs interact with multisensory interplay (MSI) when multisensory stimuli are presented synchronously. Here, we tested whether other types of MSI - evoked by asynchronous yet temporally flanking irrelevant stimuli - result in similar performance patterns. To this end, we presented sequences of 12 stimuli (10 Hz) consisting of auditory (A), visual (V), or alternating auditory-visual stimuli (e.g. A-V-A-V-…) with either auditory or visual targets (Exp. 1). Participants discriminated target frequencies (auditory pitch or visual spatial frequency) embedded in these sequences. To test effects of TE, the proportion of early and late temporal target positions was manipulated run-wise. Performance for unisensory targets was affected by temporally flanking distractors, with auditory temporal flankers selectively improving visual target perception (Exp. 1). However, no effect of temporal expectation was observed. Control experiments (Exp. 2-3) tested whether this lack of a TE effect was due to the higher presentation frequency in Exp. 1 relative to previous experiments. Importantly, even at higher stimulation frequencies, redundant multisensory targets (Exp. 2-3) reliably modulated TEs. Together, our results indicate that visual target detection was enhanced by MSI. However, this cross-modal enhancement - in contrast to the redundant target effect - was still insufficient to generate TEs. We posit that unisensory target representations were either unstable or insufficient for the generation of TEs while less demanding MSI still occurred, highlighting the need for robust stimulus representations when generating temporal expectations.
Keywords: Audio-visual; Colavita; Context-dependence; Cross-modal; Temporal expectations; Visual dominance.
Copyright © 2021 Elsevier Ltd. All rights reserved.