Front Psychol. 2015 Aug 11;6:1138. doi: 10.3389/fpsyg.2015.01138. eCollection 2015.

The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

Arianna N LaCroix et al. Front Psychol.

Abstract

The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe, with hemispheric asymmetries. The present meta-analysis used activation likelihood estimate (ALE) analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks, and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and listening to speech preferentially activate distinct bilateral temporo-parietal cortical networks. We also found that music and speech share resources in the left pars opercularis, whereas the left pars triangularis houses speech-specific resources. The extent to which music recruited speech-activated frontal resources was modulated by task. While meta-analytic techniques certainly have limitations, particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may affect conclusions regarding the neurobiology of speech and music.
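The core idea of an activation likelihood estimate can be sketched in a few lines: each study's reported peak coordinates are blurred with a 3D Gaussian to form a "modeled activation" map, and per-voxel probabilities are combined across studies with a probabilistic union. The sketch below is illustrative only, not the authors' pipeline (which used corrected thresholds and MNI space); the grid size, smoothing width, and peak coordinates are assumptions.

```python
# Minimal sketch of the ALE idea: Gaussian-blur each study's peaks,
# then take a probabilistic union across studies. Grid, FWHM, and
# coordinates are illustrative assumptions, not values from the paper.
import numpy as np

GRID = (20, 20, 20)   # toy voxel grid (real ALE operates in MNI space)
FWHM = 3.0            # smoothing width in voxels (assumed)
SIGMA = FWHM / (2 * np.sqrt(2 * np.log(2)))

def modeled_activation(foci):
    """Per-study map: max over foci of a 3D Gaussian at each peak."""
    zz, yy, xx = np.indices(GRID)
    ma = np.zeros(GRID)
    for fz, fy, fx in foci:
        d2 = (zz - fz) ** 2 + (yy - fy) ** 2 + (xx - fx) ** 2
        ma = np.maximum(ma, np.exp(-d2 / (2 * SIGMA ** 2)))
    return ma

def ale(studies):
    """Combine studies voxelwise: ALE = 1 - prod_i (1 - MA_i)."""
    ale_map = np.zeros(GRID)
    for foci in studies:
        ale_map = 1 - (1 - ale_map) * (1 - modeled_activation(foci))
    return ale_map

# Two toy "studies" reporting nearby peaks: convergence is highest
# where their modeled activation maps overlap.
studies = [[(10, 10, 10)], [(10, 11, 10), (3, 3, 3)]]
ale_map = ale(studies)
print(float(ale_map[10, 10, 10]))
```

In practice, the resulting ALE map is tested against a null distribution of randomly relocated foci and thresholded (here, p < 0.05, corrected) before interpretation; contrasts between conditions (e.g., music vs. speech) subtract thresholded ALE maps.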

Keywords: Broca's area; fMRI; meta-analysis; music perception; speech perception.


Figures

Figure 1
(A) Representative sagittal slices of the ALE for passive listening to speech, p < 0.05, corrected, overlaid on top of the passive music listening ALE. (B) Speech vs. music passive listening contrasts results, p < 0.05 corrected.
Figure 2
Representative sagittal slices of the ALEs for the (A) music discrimination, (B) music error detection and (C) music memory task conditions, p < 0.05, corrected, overlaid on top of the passive music listening ALE for comparison.
Figure 3
Representative slices of the contrast results for the comparison of (A) music discrimination, (B) music error detection, and (C) music memory task conditions, compared to the corresponding speech task, p < 0.05, corrected.
Figure 4
Representative slices of the contrast results for the comparison of (A) music discrimination, (B) music error detection, and (C) music memory task conditions, compared to passive listening to speech, p < 0.05, corrected.
Figure 5
Representative slices of the contrast results for the comparison of (A) music discrimination, (B) music error detection, and (C) music memory task conditions, compared to passive listening to music, p < 0.05, corrected.
