Audiol Neurootol. 2008;13(2):105-12. doi: 10.1159/000111782. Epub 2007 Nov 29.

The benefits of combining acoustic and electric stimulation for the recognition of speech, voice and melodies

Michael F Dorman et al. Audiol Neurootol. 2008.

Abstract

Fifteen patients fit with a cochlear implant in one ear and a hearing aid in the other ear were given tests of speech and melody recognition and voice discrimination under conditions of electric (E) stimulation, acoustic (A) stimulation, and combined electric and acoustic stimulation (EAS). When acoustic information was added to electrically stimulated information, performance increased by 17-23 percentage points on tests of word and sentence recognition in quiet and sentence recognition in noise. On average, the EAS patients achieved higher scores on CNC words than patients fit with a unilateral cochlear implant. While the best EAS patients did not outperform the best patients fit with a unilateral cochlear implant, proportionally more EAS patients than unilateral cochlear implant patients achieved very high scores on tests of speech recognition.


Figures

Fig. 1. Mean audiogram for 15 patients with low-frequency residual hearing.

Fig. 2. Mean scores and standard deviations for E, A and EAS conditions. Significant differences among conditions are indicated, e.g., E > A, for each type of test material. a CNC words. b Consonants. c Vowels. d Sentences – quiet. e Sentences +10 dB SNR. f Sentences +5 dB SNR.

Fig. 3. Mean scores and standard deviations for E, A and EAS conditions. Significant differences among conditions are indicated, e.g., E > A, for each type of test material. a Melodies. b Voice – within. c Voice – between.

Fig. 4. Percent correct CNC words for conventional implant patients in Helms et al. [1997] and for the patients in the current study using E and EAS stimulation. Each dot indicates the performance of a single patient. Group mean scores are indicated by a horizontal line.

Fig. 5. Percent correct scores for patients, fit with conventional cochlear implants, who scored 50% correct or better on CNC words (average and above) and for patients in the current study using EAS. Each dot indicates the performance of a single patient. a CNC words. b Consonants. c Vowels. d Sentences – quiet. e Sentences +10 dB SNR. f Sentences +5 dB SNR.

Fig. 6. Percent correct scores for patients, fit with conventional cochlear implants, who scored 50% correct or better on CNC words (average and above) and for patients in the current study using EAS. Each dot indicates the performance of a single patient. a Melody recognition. b Voice – within. c Voice – between.

References

    1. Brown CA, Bacon SP. The effect of fundamental frequency on speech intelligibility in simulated electric-acoustic listening. J Acoust Soc Am. 2007;121:3039.
    2. Chang J, Bai J, Zeng F-G. Unintelligible low-frequency sound enhances simulated cochlear-implant speech recognition in noise. IEEE Trans Biomed Eng. 2006;53:2598–2601.
    3. Childers D, Wu K. Gender recognition from speech. II. Fine analysis. J Acoust Soc Am. 1991;90:1841–1856.
    4. Ching T, Incerti P, Hill M. Binaural benefits for adults who use hearing aids and cochlear implants in opposite ears. Ear Hear. 2004;25:9–21.
    5. Clopper CG, Carter AK, Dillon CM, Hernandez LR, Pisoni DB, Clarke CM, Harnsberger JD, Herman R. The Indiana Speech Project: an overview of the development of a multi-talker multi-dialect speech corpus. Bloomington, Speech Research Laboratory, Indiana University, Research on Speech Perception Progress Report. 2002;25:367–380.
