Brain Lang. 2018 Aug;183:64-78. doi: 10.1016/j.bandl.2018.05.002. Epub 2018 Jun 29.

Hearing and orally mimicking different acoustic-semantic categories of natural sound engage distinct left hemisphere cortical regions


James W Lewis et al. Brain Lang. 2018 Aug.

Abstract

Oral mimicry is thought to be essential for the neurodevelopment of spoken language systems in infants and for the evolution of language in hominins, and it may also aid recovery in stroke patients. Using functional magnetic resonance imaging (fMRI), we previously reported a divergence of auditory cortical pathways mediating perception of specific categories of natural sounds. However, it remained unclear if or how this fundamental sensory organization by the brain might relate to motor output, such as sound mimicry. Here, using fMRI, we revealed a dissociation of brain regions preferentially activated when hearing, with the intent to imitate, and when orally mimicking animal action sounds versus animal vocalizations as distinct acoustic-semantic categories. This functional dissociation may reflect components of a rudimentary cortical architecture that links systems for processing acoustic-semantic universals of natural sound with motor-related systems mediating oral mimicry at a category level. The observation that different brain regions are involved in different aspects of oral mimicry may inform targeted therapies for rehabilitation of functional abilities after stroke.

Keywords: Acoustic communication; Acoustic-semantic categories; Categorical perception; Echo-mirror neuron system; Language evolution; Sound symbolism; Stroke rehabilitation; fMRI.


Figures

Fig. 1.
A neurobiological model of the organization of the human brain for processing and recognizing different acoustic-semantic categories of natural sounds [from Brefczynski-Lewis and Lewis (2017)]. Bold text in the boxed regions depicts rudimentary sound categories proposed to represent ethologically relevant categories germane to sound recognition in all mammalian species. Human speech, tool-use sounds, and human-made machinery sounds are represented as extensions of these categories. Vocal and instrumental music sounds are regarded as higher forms of communication, which rely on other networks. The present study tests the putative functional boundary (double-headed arrow) between cortical networks for mimicking action sounds versus mimicking vocalizations, using animal (non-conspecific) sound stimuli. Refer to text for other details.
Fig. 2.
Clustered-acquisition fMRI design. The animal action sound events, vocalization sound events, and silent events were presented in pseudo-random order. Each sound event was followed by a ‘silent period’ during which the participant mimicked the sound they had just heard, as depicted. Stimulus and mimicry events were triggered every 10 s plus the time until the participant’s next cardiac cycle (R-wave). Refer to text for other details.
Fig. 3.
Cortical networks preferentially activated when (A–B) hearing animal action sounds versus animal vocalizations and when (C) orally mimicking those corresponding sound stimuli. White dotted outlines depict functional estimates of core and belt auditory cortices based on the localizer scan. (A) Data from an earlier study with timing parameters optimized for revealing intermediate auditory cortices processing animal action sounds (yellow, pcorr < 0.001; pale yellow, pcorr < 0.01) versus animal vocalizations (red, pcorr < 0.001; transparent red, pcorr < 0.01), illustrated on inflated cortical surface models of the PALS atlas, adapted and reprinted with permission from the publisher. (B) Group-averaged fMRI results (n = 16) from the present study preferential for hearing animal actions versus vocalizations, and (C) for orally mimicking those same sounds by category (refer to color keys for corrected threshold settings). Histogram indicates the BOLD percent signal change (average ± SEM) in response to each category of sound and to oral mimicry of those corresponding sounds. Refer to text for other details. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 4.
Group-averaged activation maps resulting from ANOVA and t-test analyses, revealing regions preferential both for perception of a given category of sound and for oral mimicry of that same category. (A–B) Foci derived from analyses including the 36 retained stimulus event types (from Table 1), showing maps of category-preferential foci relative to functionally derived auditory belt (light blue) and parabelt cortices (dark blue), defined using a separate localizer scan with English phonemes. Histograms illustrate the BOLD percent signal change (average ± SEM) for various regions of interest in response to each category of sound and to oral mimicry of those corresponding sounds, both relative to averaged responses to silent events. (C) Charts illustrating subject ratings of perceived difficulty for mimicking each sound stimulus. (D–E) Maps showing preferential processing for hearing and oral mimicry using the same analysis techniques but with only a subset of the sounds (panel C) that were reverse-biased in perceived difficulty to mimic. LMC = laryngeal motor cortex (estimated; overlapping with vlPC); aSTG = anterior superior temporal gyrus; S1 = primary somatosensory cortex (estimated); vlPC = ventro-lateral paracentral lobule. Refer to text for other details. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 5.
Brain regions showing parametric sensitivity to the participants’ ratings of how well they thought they orally mimicked each sound, by category (n = 14 of 16 participants). (A) Whole-brain primary-level analysis showing the most strongly activated regions that were parametrically correlated with perceived mimic quality (see color key for thresholds). Poorer mimicry was generally associated with greater activation. The right precentral gyrus focus (Talairach x = 45, y = −14, z = 52; 835 mm³) and right middle cingulate focus (x = 11, y = −10, z = 42; 1090 mm³) showed the strongest degree of linear correlation between perceived mimic quality and BOLD signal responses. (B) Several group-averaged ROIs from Fig. 4 also showed significant parametric activation correlated with perceived mimic quality, with some areas showing dependence on the category of sound. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

