Decoding semantic sound categories in early visual cortex
- PMID: 40751662
- PMCID: PMC12317377
- DOI: 10.1093/cercor/bhaf208
Abstract
Early visual cortex, once thought to be used exclusively for visual processing, has been shown to represent auditory information in the absence of visual stimulation. However, the exact information content of these representations is still unclear, as is their degree of specificity. Here, we acquired functional magnetic resonance imaging (fMRI) data while blindfolded human participants listened to 36 natural sounds, hierarchically organized into semantic categories. Multivoxel pattern analysis revealed that animate and inanimate sounds, as well as human, animal, vehicle, and object sounds, could be decoded from fMRI activity patterns in early visual regions V1, V2, and V3. Further, pairwise classification of the different sound categories demonstrated that sounds produced by humans were represented in early visual cortex more distinctively than other semantic categories. Whole-brain searchlight analysis showed that sounds could also be decoded in higher-level visual and multisensory brain regions. Our findings extend our understanding of early visual cortex function beyond visual feature processing and show that semantic and categorical sound information is represented in early visual cortex, potentially used to predict visual input.
Keywords: MVPA; audition; early visual cortex; fMRI; multisensory interaction.
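To illustrate the kind of multivoxel pattern analysis described in the abstract, the sketch below shows a generic decoding pipeline, not the authors' actual code: a linear classifier trained on region-of-interest voxel patterns with leave-one-run-out cross-validation. The array names (`patterns`, `labels`, `runs`) and all sizes are hypothetical placeholders for illustration.

```python
# Minimal MVPA decoding sketch (illustrative only, not the study's pipeline):
# classify semantic sound categories from ROI voxel patterns with
# leave-one-run-out cross-validation.
# Assumes `patterns` is an (n_trials, n_voxels) array of trial-wise beta
# estimates from an early visual ROI (e.g. V1), `labels` gives the semantic
# category of each trial, and `runs` indexes the fMRI run of each trial.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels, n_runs = 144, 500, 4            # hypothetical sizes
patterns = rng.standard_normal((n_trials, n_voxels))  # placeholder data
labels = np.repeat(["human", "animal", "vehicle", "object"], n_trials // 4)
runs = np.tile(np.arange(n_runs), n_trials // n_runs)

# Linear SVM on z-scored voxel patterns; each cross-validation fold leaves
# one run out so training and test trials never share a run.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10_000))
scores = cross_val_score(clf, patterns, labels, groups=runs,
                         cv=LeaveOneGroupOut())
print(f"mean decoding accuracy: {scores.mean():.3f} (chance = 0.25)")
```

With random placeholder data the accuracy hovers around chance (0.25 for four categories); applied to real ROI patterns, above-chance accuracy is the evidence that the region carries category information. Pairwise classification of two categories at a time follows the same structure with binary labels.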
© The Author(s) 2025. Published by Oxford University Press.
Conflict of interest statement
No conflict of interest declared.