Distinct sensitivity to spectrotemporal modulation supports brain asymmetry for speech and melody
- PMID: 32108113
- DOI: 10.1126/science.aaz3468
Abstract
Does brain asymmetry for speech and music emerge from acoustical cues or from domain-specific neural networks? We selectively filtered temporal or spectral modulations in sung speech stimuli for which verbal and melodic content was crossed and balanced. Perception of speech decreased only with degradation of temporal information, whereas perception of melodies decreased only with spectral degradation. Functional magnetic resonance imaging data showed that the neural decoding of speech and melodies depends on activity patterns in left and right auditory regions, respectively. This asymmetry is supported by specific sensitivity to spectrotemporal modulation rates within each region. Finally, the effects of degradation on perception were paralleled by their effects on neural classification. Our results suggest a match between acoustical properties of communicative signals and neural specializations adapted to that purpose.
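The filtering manipulation described in the abstract — selectively degrading temporal or spectral modulations — can be sketched as low-pass filtering a spectrogram in the 2-D modulation (Fourier) domain. The function below is a hypothetical illustration of that idea, not the authors' exact pipeline; the spectrogram construction, cutoff values, and function names are assumptions for the sketch.

```python
import numpy as np

def filter_modulations(spectrogram, axis, cutoff_frac):
    """Low-pass filter the modulations of a (freq x time) magnitude
    spectrogram along one axis via the 2-D Fourier transform.
    axis=1 attenuates fast temporal modulations (speech-relevant cues);
    axis=0 attenuates fine spectral modulations (melody-relevant cues).
    Hypothetical sketch, not the authors' exact procedure."""
    mps = np.fft.fft2(spectrogram)              # modulation domain
    n = spectrogram.shape[axis]
    rates = np.abs(np.fft.fftfreq(n))           # normalized modulation rates
    keep = rates <= cutoff_frac * rates.max()   # low-pass mask
    shape = [1, 1]
    shape[axis] = n
    mps *= keep.reshape(shape)                  # zero out fast modulations
    return np.real(np.fft.ifft2(mps))

# Toy spectrogram: a slow spectral ripple plus a fast temporal ripple.
f = np.linspace(0, 1, 64)[:, None]
t = np.linspace(0, 1, 128)[None, :]
spec = np.cos(2 * np.pi * 2 * f) + np.cos(2 * np.pi * 40 * t)

# Degrade temporal information only: the fast temporal ripple is
# attenuated while the spectral ripple survives.
smooth = filter_modulations(spec, axis=1, cutoff_frac=0.1)
```

Under this sketch, the crossed design in the study corresponds to applying the same operation with `axis=0` to degrade spectral modulations instead, leaving temporal structure intact.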
Copyright © 2020 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Comment in
- Splitting speech and music. Science. 2020 Feb 28;367(6481):974-976. doi: 10.1126/science.aba7913. PMID: 32108099. No abstract available.
