Randomized Controlled Trial

Cereb Cortex. 2011 Jul;21(7):1507-18. doi: 10.1093/cercor/bhq198. Epub 2010 Nov 11.

Decoding temporal structure in music and speech relies on shared brain resources but elicits different fine-scale spatial patterns


Daniel A Abrams et al. Cereb Cortex. 2011 Jul.

Abstract

Music and speech are complex sound streams with hierarchical rules of temporal organization that become elaborated over time. Here, we use functional magnetic resonance imaging to measure brain activity patterns in 20 right-handed nonmusicians as they listened to natural and temporally reordered musical and speech stimuli matched for familiarity, emotion, and valence. Heart rate variability and mean respiration rates were simultaneously measured and were found not to differ between musical and speech stimuli. Although the same manipulation of temporal structure elicited brain activation level differences of similar magnitude for both music and speech stimuli, multivariate classification analysis revealed distinct spatial patterns of brain responses in the 2 domains. Distributed neuronal populations that included the inferior frontal cortex, the posterior and anterior superior and middle temporal gyri, and the auditory brainstem classified temporal structure manipulations in music and speech with significant levels of accuracy. While agreeing with previous findings that music and speech processing share neural substrates, this work shows that temporal structure in the 2 domains is encoded differently, highlighting a fundamental dissimilarity in how the same neural resources are deployed.
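
The multivariate classification analysis mentioned above can be illustrated with a minimal sketch. The snippet below is not the authors' pipeline; it assumes a generic ROI-based approach in which a linear support vector machine is trained to distinguish natural from temporally reordered stimuli using per-trial voxel activation patterns, with cross-validated accuracy as the outcome. The data shapes, labels, and classifier choice are illustrative assumptions only.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    # Synthetic stand-in for per-trial ROI activation patterns (trials x voxels);
    # a real analysis would use pattern estimates extracted from the fMRI data.
    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 200
    X = rng.standard_normal((n_trials, n_voxels))
    y = np.repeat([0, 1], n_trials // 2)  # 0 = natural, 1 = temporally reordered

    # Linear classifier on spatial patterns, evaluated with stratified cross-validation.
    clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    accuracy = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"mean cross-validated accuracy: {accuracy.mean():.2f}")

With random patterns as in this sketch, accuracy should hover near chance (0.5); above-chance accuracy on real patterns is what the classification maps in Figures 4-6 summarize.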


Figures

Figure 1. Music and speech stimuli. Examples of normal and reordered speech (left) and music (right) stimuli. The top and middle panels include an oscillogram of the waveform (top) and a sound spectrogram (bottom). Frequency spectra of the normal and reordered stimuli are plotted at the bottom of each side.
Figure 2. Equivalence of physiological measures by experimental condition. (A) Mean breaths per minute for each stimulus type. (B) HRV for each stimulus type as indexed by the mean of individual participants’ standard deviations over the course of the experiment. There were no significant differences within or across stimulus types.
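
A minimal sketch of the HRV index described in panel B, under the assumption that each participant contributes a heart-rate time series and that the index is each participant's standard deviation over the experiment, averaged across participants within a stimulus type. The synthetic values below are placeholders, not the study's recordings.

    import numpy as np

    # Hypothetical per-participant heart-rate samples for one stimulus type
    # (20 participants, 600 samples each); real recordings are not reproduced here.
    rng = np.random.default_rng(1)
    heart_rate = {f"sub{p:02d}": 70 + 5 * rng.standard_normal(600) for p in range(20)}

    # HRV index: mean across participants of each participant's standard deviation.
    per_subject_sd = [samples.std(ddof=1) for samples in heart_rate.values()]
    hrv_index = float(np.mean(per_subject_sd))
    print(f"group HRV index (mean of per-subject SDs): {hrv_index:.2f}")
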
Figure 3. Activation to music and speech. Surface rendering and axial slice (Z = −2) of cortical regions activated by music and speech stimuli show strong responses in the IFC and the superior and middle temporal gyri. The contrast used to generate this figure was (speech + reordered speech + music + reordered music) – rest. The image was thresholded at a voxel-wise height threshold of P < 0.01, with family-wise error (FWE) correction for multiple spatial comparisons at the cluster level (P < 0.05). Functional images are superimposed on a standard brain from a single normal subject (MRIcroN: ch2bet.nii.gz).
Figure 4. MPA of temporal structure in music and speech. (A, B) Classification maps for temporal structure in music and speech superimposed on a standard brain from a single normal subject. (C) Color-coded locations of IFC ROIs. (D) Maximum classification accuracies in BAs 44 (yellow), 45 (brown), and 47 (cyan). The crosshair indicates the voxel with maximum classification accuracy.
Figure 5. MPA of temporal structure in music and speech. (A-C) Classification maps for temporal structure in music and speech superimposed on a standard brain from a single normal subject. (D) Maximum classification accuracies for PT (pink), HG (cyan), and PP (orange) in the superior temporal plane. (E) Color-coded locations of temporal lobe ROIs. (F) Maximum classification accuracies for pSTG (yellow), pMTG (red), aSTG (white), aMTG (blue), and tPole (green) in the middle and superior temporal gyri as well as the temporal pole. a, anterior; p, posterior; tPole, temporal pole.
Figure 6. MPA of temporal structure in music and speech. Classification maps for brainstem regions (A) cochlear nucleus (cyan) and (B) inferior colliculus (green) superimposed on a standard brain from a single normal subject (MRIcroN: ch2.nii.gz).
Figure 7. ROI signal change analysis. Percentage signal change in ROIs for the music structure (blue) and speech structure (red) conditions. ROIs were constructed using suprathreshold voxels from the classification analysis in 11 frontal and temporal cortical regions bilaterally. There were no significant differences in signal change to temporal structure manipulations in music and speech. TP, temporal pole.
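
A short sketch of a percent-signal-change computation of the kind summarized in Figure 7, assuming the common convention of expressing the condition mean relative to a baseline (rest) mean within an ROI; the indexing and values below are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    # Hypothetical ROI-averaged BOLD time series (240 volumes) with alternating
    # condition and rest volumes; real designs and HRF modeling are omitted here.
    rng = np.random.default_rng(2)
    roi_signal = 1000 + rng.standard_normal(240)
    condition_idx = np.arange(0, 240, 2)
    baseline_idx = np.arange(1, 240, 2)

    # Percent signal change: (condition mean - baseline mean) / baseline mean * 100.
    baseline_mean = roi_signal[baseline_idx].mean()
    pct_change = 100.0 * (roi_signal[condition_idx].mean() - baseline_mean) / baseline_mean
    print(f"ROI percent signal change: {pct_change:.3f}%")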
