Randomized Controlled Trial

BMC Neurosci. 2012 Sep 20;13:113. doi: 10.1186/1471-2202-13-113.

ERP evidence for the recognition of emotional prosody through simulated cochlear implant strategies

Deepashri Agrawal et al. BMC Neurosci. 2012.

Abstract

Background: Emotionally salient information in spoken language can be conveyed by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential for conveying feelings through speech. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs). The aim of this study was to investigate how normal-hearing (NH) participants perceive emotional prosody in vocoded stimuli. Semantically neutral sentences with emotional (happy, angry and neutral) prosody were used. The sentences were manipulated to simulate two CI speech-coding strategies: the Advanced Combination Encoder (ACE) and the newly developed Psychoacoustic Advanced Combination Encoder (PACE). Twenty NH adults were asked to recognize emotional prosody from ACE and PACE simulations. Performance was assessed using behavioral tests and event-related potentials (ERPs).
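For readers unfamiliar with vocoded stimuli, the sketch below illustrates the general channel-vocoding idea behind such CI simulations: the signal is split into a few frequency bands, the temporal envelope of each band is extracted, and the envelopes modulate band-limited noise carriers. This is a minimal illustration only; it is not the ACE or PACE strategy used in the study, and the band count, filter orders and envelope cutoff are assumed values.

```python
# Minimal noise-vocoder sketch of the general channel-vocoding idea used to
# simulate CI processing. NOT the ACE or PACE coding strategy from the study;
# band count, filter orders and envelope cutoff are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocode(signal, fs, n_bands=8, f_lo=100.0, f_hi=7000.0, env_cutoff=160.0):
    """Return a noise-vocoded version of `signal` (mono float array).

    Assumes fs is high enough that f_hi < fs / 2 (e.g. 16 kHz or more).
    """
    # Logarithmically spaced analysis band edges between f_lo and f_hi.
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    carrier = rng.standard_normal(len(signal))        # broadband noise carrier
    b_env, a_env = butter(2, env_cutoff, btype='low', fs=fs)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype='bandpass', fs=fs)
        band = filtfilt(b, a, signal)                 # analysis band
        env = filtfilt(b_env, a_env, np.abs(band))    # rectified, smoothed envelope
        env = np.clip(env, 0.0, None)
        noise_band = filtfilt(b, a, carrier)          # band-limited noise carrier
        out += env * noise_band                       # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)        # normalize to avoid clipping
```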

Results: Behavioral data revealed superior performance with the original stimuli compared to the simulations. For the simulations, happy and angry prosody were recognized better than neutral prosody. Irrespective of whether stimuli were simulated or unsimulated, a significantly larger P200 event-related potential was observed after sentence onset for happy prosody than for the other two emotions. Furthermore, the P200 amplitude was significantly more positive for the PACE strategy than for the ACE strategy.
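As a rough illustration of how a P200 measure like the one reported above could be obtained, the sketch below averages single-trial epochs at Cz and reads the most positive deflection in an assumed 150-250 ms post-onset window. The sampling rate, epoch window and peak search window are assumptions, not the paper's exact analysis parameters.

```python
# Hedged sketch: average baseline-corrected epochs (trials x samples) at Cz and
# read the P200 peak. Sampling rate, epoch start and the 150-250 ms search
# window are assumed values, not the paper's exact analysis settings.
import numpy as np

def p200_amplitude(epochs, fs, epoch_start=-0.1, win=(0.15, 0.25)):
    """Return (peak amplitude in µV, peak latency in ms) of the averaged ERP."""
    erp = epochs.mean(axis=0)                          # average across trials
    times = epoch_start + np.arange(erp.size) / fs     # time axis in seconds
    mask = (times >= win[0]) & (times <= win[1])       # P200 search window
    idx = np.argmax(erp[mask])                         # most positive deflection
    return erp[mask][idx], times[mask][idx] * 1000.0

# Hypothetical usage with per-condition epoch arrays at Cz:
# amp_happy, lat_happy = p200_amplitude(epochs_happy_cz, fs=500)
# amp_angry, lat_angry = p200_amplitude(epochs_angry_cz, fs=500)
```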

Conclusions: The results suggest that the P200 peak is an indicator of active differentiation and recognition of emotional prosody. The larger P200 peak amplitude for happy prosody indicates the importance of fundamental frequency (F0) cues in prosody processing. The advantage of PACE over ACE highlights the privileged role of the psychoacoustic masking model in improving prosody perception. Taken together, the study emphasizes the importance of vocoded simulations for better understanding which prosodic cues CI users may be utilizing.


Figures

Figure 1
ERP waveforms for the three emotional prosodies in simulated and unsimulated conditions. Average ERP waveforms recorded at the Cz electrode in the original (unsimulated) and simulated conditions for all three emotions [neutral (black), angry (red) and happy (blue)], from 100 ms before to 500 ms after sentence onset, with the corresponding scalp topographies at the P200 peak (x-axis: latency in milliseconds; y-axis: amplitude in μV). Top: N100-P200 waveform for original sentences. Middle: waveform for ACE simulations. Bottom: waveform for PACE simulations.

Figure 2
Pitch contours of the three emotions. Praat-generated pitch contours of neutral (solid line), angry (dotted line) and happy (dashed line) prosody for the original (unsimulated) sentence "Sie hat die Zeitung gelesen" ("She read the newspaper").

Figure 3
Spectrograms of the simulated and unsimulated stimuli. Spectrograms (computed with the Praat software) of the three stimulus types for a happy sentence. Top: waveform of the happy sentence. Bottom: spectrograms of the same sentence. Left: original (unsimulated) sentence. Centre: ACE simulation. Right: PACE simulation.
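The pitch contours in Figure 2 were generated with Praat. As a minimal sketch, F0 contours of this kind could be extracted in Python via the praat-parselmouth package, as below; the file name and the pitch floor/ceiling values are illustrative assumptions, not the settings used in the study.

```python
# Sketch of extracting an F0 contour like those in Figure 2, using
# praat-parselmouth as a stand-in for the Praat analysis; the file name and
# pitch floor/ceiling are assumptions.
import numpy as np
import parselmouth

snd = parselmouth.Sound("sie_hat_die_zeitung_gelesen_happy.wav")  # hypothetical file
pitch = snd.to_pitch(pitch_floor=75.0, pitch_ceiling=500.0)
f0 = pitch.selected_array['frequency']   # F0 in Hz; 0 where no voicing is found
f0[f0 == 0] = np.nan                     # mark unvoiced frames as missing
times = pitch.xs()                       # frame times in seconds
```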

