Cerebral mechanisms of prosodic sensory integration using low-frequency bands of connected speech

Isabelle Hesling et al. Hum Brain Mapp. 2005 Nov;26(3):157-69. doi: 10.1002/hbm.20147.

Abstract

Although speech perception involves both hemispheres, converging data point to a functional asymmetry at the level of the secondary auditory cortices. Using fMRI in 12 right-handed French men passively listening to long connected-speech stimuli, we investigated the neuronal networks involved in the integration of the low-frequency bands of speech by comparing: 1) brain activity in two listening conditions (FN and NF) that differ in how pitch modulations are integrated (in FN, the low frequencies, obtained with a low-pass filter, are delivered to the left ear while the whole acoustic message is simultaneously delivered to the right ear; NF is the reverse arrangement); 2) brain activity induced by high vs. low degrees of prosodic expression (expressive vs. flat speech); and 3) the effects of the same connected-speech stimulus under the two listening conditions. Each stimulus induced a specific cerebral network; the flat stimulus weakened activations, which were largely reduced to the bilateral STG in both listening conditions. In the expressive condition, the specific sensory integration in FN produced an increased involvement of the articulatory loop and the recruitment of new regions, namely right BA 6-44, left BA 39-40, the left posterior insula, and bilateral BA 30. This finding may be explained by temporal integration windows that differ both in length and in the acoustic cues they decode, strengthening the "asymmetric sampling in time" hypothesis proposed by Poeppel (Speech Commun 2003;41:245-255). Such improved prosodic integration could find applications in the rehabilitation of some speech disorders.
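The FN/NF stimulus construction described above (low-pass-filtered speech to one ear, the full signal to the other) can be sketched as follows. This is a minimal illustration, not the authors' actual stimulus pipeline: the cutoff frequency and filter order are assumptions chosen for the example, as the abstract does not specify them.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def make_dichotic(signal, fs, cutoff_hz=400.0, condition="FN"):
    """Build a stereo (left, right) dichotic stimulus from a mono speech signal.

    FN: low-pass-filtered speech to the left ear, full signal to the right ear.
    NF: the reverse arrangement.
    cutoff_hz and the 4th-order Butterworth filter are illustrative assumptions.
    """
    # Zero-phase low-pass filtering keeps the pitch contour aligned in time
    # across the two ears.
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    low = sosfiltfilt(sos, signal)
    if condition == "FN":
        left, right = low, signal
    else:  # NF
        left, right = signal, low
    return np.stack([left, right], axis=-1)  # shape: (n_samples, 2)
```

Zero-phase filtering (`sosfiltfilt`) is used so that the filtered and unfiltered channels stay synchronized, which matters for a dichotic presentation where both ears receive the message simultaneously.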


Figures

Figure 1
Pitch modulations of the first 3 s extracted with WaveSurfer Software [Sjölander and Beskow, 2000]. The upper image (A) illustrates the pitch contour of the expressive speech presentation. The lower image (B) illustrates the pitch contour of the toneless speech presentation.
Figure 2
Functional ROIs on normalized and smoothed Subject #4 (right and left BA 6, BA 44, BA 39‐40, and BA 41‐42, 22, 21, 38 (STG)).
Figure 3
Patterns of activation elicited by the two listening conditions (FN: filtered speech stimulus to the left ear, normal speech stimulus to the right ear; NF: the reverse) in an expressive connected-speech presentation and a flat one. A: Expressive, FN; Z ranging from 5.80 to 13.72. B: Expressive, NF; Z ranging from 3.66 to 15.68. C: Flat, FN; Z ranging from 6.40 to 12.30. D: Flat, NF; Z ranging from 3.89 to 13.28.
Figure 4
Plots of means of ROI × Listening condition × Hemisphere × Prosody (A: group analysis and B: individual analysis, the STG—BA 41‐42, BA 22, BA 21, BA 38—being pooled).
