Hum Brain Mapp. 2018 Dec;39(12):4913-4924. doi: 10.1002/hbm.24333. Epub 2018 Aug 18.

Singing in the brain: Neural representation of music and voice as revealed by fMRI

Jocelyne C Whitehead et al. Hum Brain Mapp. 2018 Dec.

Abstract

The ubiquity of music across cultures as a means of emotional expression, and its proposed evolutionary relation to speech, has motivated researchers to attempt a characterization of its neural representation. Several neuroimaging studies have reported that specific regions in the anterior temporal lobe respond more strongly to music than to other auditory stimuli, including spoken voice. Nonetheless, because most studies have employed instrumental music, which has important acoustic distinctions from the human voice, questions remain as to the specificity of the observed "music-preferred" areas. Here, we sought to address this issue by testing 24 healthy young adults with fast, high-resolution fMRI, recording neural responses to a large and varied set of musical stimuli that, critically, included a cappella singing as well as purely instrumental excerpts. Our results confirmed that music, whether vocal or instrumental, preferentially engaged regions in the superior temporal gyrus (STG), particularly the anterior planum polare, bilaterally. In contrast, human voice, either spoken or sung, more strongly activated a large area along the superior temporal sulcus. Findings were consistent across univariate and multivariate analyses, as well as with the use of a "silent" sparse acquisition sequence that minimizes any potential influence of scanner noise on the resulting activations. Activity in music-preferred regions could not be accounted for by any of the basic acoustic parameters tested, suggesting that these areas integrate, likely in a nonlinear fashion, a combination of acoustic attributes that together result in the perceived musicality of the stimuli, consistent with the proposed hierarchical processing of complex auditory information within the temporal lobes.
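As a rough illustration of the kind of univariate analysis reported here (voxel-wise contrasts such as [singing − speech], thresholded at p = .001 with cluster-level correction), the sketch below shows how a single-subject contrast could be set up in a nilearn-style workflow. This is a minimal sketch only: the file names, TR value, and event labels are hypothetical placeholders, not the authors' actual pipeline, and the paper's reported results are group-level.

```python
# Minimal sketch of a first-level [singing - speech] contrast with nilearn.
# File names, TR, and event labels are assumptions for illustration only.
from nilearn.glm.first_level import FirstLevelModel
from nilearn.glm import threshold_stats_img

# Hypothetical inputs: one preprocessed functional run and a BIDS-style events
# table with 'onset', 'duration', and 'trial_type' in
# {'singing', 'speech', 'instrumental'}.
fmri_img = "sub-01_task-music_bold_preproc.nii.gz"   # assumed file name
events = "sub-01_task-music_events.tsv"              # assumed file name

model = FirstLevelModel(t_r=0.5,            # fast acquisition; value assumed
                        hrf_model="glover")
model = model.fit(fmri_img, events=events)

# Contrast of interest: singing minus speech
z_map = model.compute_contrast("singing - speech", output_type="z_score")

# Voxel-wise p < .001 plus a cluster-extent threshold (extent value assumed)
thresholded_map, threshold = threshold_stats_img(
    z_map, alpha=0.001, height_control="fpr", cluster_threshold=50)
```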

Keywords: fMRI; music; speech; singing; neural overlap; neural preference; pulse clarity.


Conflict of interest statement

The authors have no conflicts of interest to declare.

Figures

Figure 1
(a) 2D and (b) 3D renderings of the clusters of significant activations for the contrasts [singing − speech] (red), [instrumental music − speech] (green), as well as their conjunction (white). Threshold: p = .001 (corrected for multiple comparisons at the cluster level). Group average of the responses for each condition in each cluster (left and right hemispheres), using unsmoothed data. In: Instrumental music; Si: Singing; Sp: Speech; A.U.: arbitrary units. *significant difference (p < .001) between singing and instrumental music. In all cases, singing and instrumental music elicited significantly larger responses than speech (p < .001) [Color figure can be viewed at http://wileyonlinelibrary.com]
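The white clusters in Figure 1 correspond to voxels significant in both contrasts. One common way to compute such a conjunction is the minimum-statistic approach sketched below; the input z-map file names and the z threshold are assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a minimum-statistic conjunction of two contrast z-maps.
# File names and threshold are hypothetical.
from nilearn.image import math_img

sing_vs_speech = "zmap_singing_minus_speech.nii.gz"        # assumed
instr_vs_speech = "zmap_instrumental_minus_speech.nii.gz"  # assumed

# A voxel survives the conjunction only if it exceeds the threshold in BOTH
# maps, which is equivalent to thresholding the voxel-wise minimum of the two.
conj_min = math_img("np.minimum(img1, img2)",
                    img1=sing_vs_speech, img2=instr_vs_speech)
conjunction = math_img("img * (img > 3.1)", img=conj_min)  # threshold assumed
```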
Figure 2
2D (Left) and 3D (Right) renderings of the clusters of significant activations for the contrasts [singing > instrumental music] (red), [speech > instrumental music] (green), as well as their conjunction (white). Threshold: p = .001 (corrected for multiple comparisons at the cluster level) [Color figure can be viewed at http://wileyonlinelibrary.com]
Figure 3
Prevalence maps showing the percentage of subject‐specific significant activations at each voxel for the contrasts [singing > speech] (red scale) and [instrumental music > speech] (green scale). Clusters for singing were significantly more anterior (LH: p = .008; RH: p = .02) and lateral (LH: p = .03; RH: p = .003) than those for instrumental music [Color figure can be viewed at http://wileyonlinelibrary.com]
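A prevalence map like the one in Figure 3 simply counts, at each voxel, the percentage of subjects whose individual analysis was significant there. A minimal sketch, assuming 24 subject-level binarized (0/1) significance masks already resampled to a common space; the file names are hypothetical.

```python
# Minimal sketch of a prevalence map from subject-level significance masks.
# File names are assumptions; all masks must share the same space and shape.
import numpy as np
import nibabel as nib

subject_masks = [f"sub-{i:02d}_singing_gt_speech_mask.nii.gz"
                 for i in range(1, 25)]  # 24 subjects, names assumed

data = np.stack([nib.load(f).get_fdata() for f in subject_masks], axis=0)
prevalence = 100.0 * data.mean(axis=0)   # percent of subjects per voxel

# Save the result using the affine of the first subject's mask
ref = nib.load(subject_masks[0])
nib.save(nib.Nifti1Image(prevalence, ref.affine), "prevalence_singing.nii.gz")
```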
Figure 4
(a) First two components obtained in the stimulus‐specific ICA. In the second component, red and green represent positive and negative values, respectively. (b) Scatterplots of the stimulus‐specific eigenvalues corresponding to the first two ICA components. Each cross represents one stimulus: Instrumental music (red), singing (blue), and speech (green). Curves correspond to the minimum volume ellipsoid that covers all points of each category [Color figure can be viewed at http://wileyonlinelibrary.com]
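The scatterplot in Figure 4 can be thought of as projecting each stimulus onto the first two independent components of a stimulus-by-voxel matrix of response patterns. The sketch below illustrates that idea with scikit-learn's FastICA on synthetic data; the matrix shape, number of stimuli per category, and colors are assumptions for illustration, not the authors' actual data or implementation.

```python
# Minimal sketch of a stimulus-specific ICA and a category-colored scatterplot
# of per-stimulus weights on the first two components. All data are synthetic.
import numpy as np
from sklearn.decomposition import FastICA
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
responses = rng.standard_normal((60, 5000))   # 60 stimuli x 5000 voxels (assumed)
category = np.repeat(["instrumental", "singing", "speech"], 20)

ica = FastICA(n_components=2, random_state=0)
weights = ica.fit_transform(responses)        # per-stimulus weights, shape (60, 2)

colors = {"instrumental": "red", "singing": "blue", "speech": "green"}
for cat, col in colors.items():
    idx = category == cat
    plt.scatter(weights[idx, 0], weights[idx, 1],
                color=col, marker="x", label=cat)
plt.xlabel("Component 1")
plt.ylabel("Component 2")
plt.legend()
plt.show()
```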
Figure 5
Clusters of significant activations for the contrasts [instrumental music > speech] (red) and [speech > instrumental music] (green) obtained with the sparse sampling acquisition. Threshold: p = .001 (corrected for multiple comparisons at the cluster level) [Color figure can be viewed at http://wileyonlinelibrary.com]
