Sci Rep. 2017 Oct 2;7(1):12500. doi: 10.1038/s41598-017-12298-3.

Integration of acoustic and electric hearing is better in the same ear than across ears

Qian-Jie Fu et al. Sci Rep. 2017.

Abstract

Advances in cochlear implant (CI) technology allow for acoustic and electric hearing to be combined within the same ear (electric-acoustic stimulation, or EAS) and/or across ears (bimodal listening). Integration efficiency (IE; the ratio between observed and predicted performance for acoustic-electric hearing) can be used to estimate how well acoustic and electric hearing are combined. The goal of this study was to evaluate factors that affect IE in EAS and bimodal listening. Vowel recognition was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal listening. The input/output frequency range for acoustic hearing was 0.1–0.6 kHz. For CI simulations, the output frequency range was 1.2–8.0 kHz to simulate a shallow insertion depth, and the input frequency range was varied to provide increasing amounts of speech information and tonotopic mismatch. Performance was best when acoustic and electric hearing were combined in the same ear. IE was significantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EAS, but not for bimodal listening. These simulation results suggest that acoustic and electric hearing may be combined more effectively and efficiently within rather than across ears, and that tonotopic mismatch should be minimized to maximize the benefit of acoustic-electric hearing, especially for EAS.
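The abstract defines integration efficiency (IE) as the ratio between observed and predicted acoustic-electric performance. A minimal sketch of that computation, assuming (hypothetically) a probabilistic-sum model for the predicted combined score; the paper's actual prediction model may differ:

```python
def predicted_combined_score(p_a, p_e):
    # Probabilistic-sum prediction: the combined score expected if listeners
    # used the acoustic and electric inputs independently. This model is an
    # assumption for illustration, not necessarily the one used in the study.
    return 1.0 - (1.0 - p_a) * (1.0 - p_e)

def integration_efficiency(observed, p_acoustic, p_electric):
    # IE = observed / predicted combined performance (all scores are
    # proportions correct in [0, 1]). IE > 1 means better-than-predicted
    # integration of the two inputs.
    return observed / predicted_combined_score(p_acoustic, p_electric)

# Hypothetical example: acoustic-alone 40% correct, electric-alone 50%
# correct, combined (EAS) 80% correct.
ie = integration_efficiency(0.80, 0.40, 0.50)
```

With these illustrative scores, the predicted combined score is 0.70, so IE comes out slightly above 1, i.e. better integration than independent use of the two inputs would predict.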


Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Figure 1
Illustration of the output and input frequency ranges for simulated residual acoustic hearing (AH; white bars) and electric hearing (CI; black bars). The grey bars represent the regions where the AH and CI input frequency ranges overlap.
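The tonotopic mismatch described here arises when a channel's input (analysis) frequency band differs from the cochlear place frequency of its output (carrier) band. A hedged sketch of how such band edges and per-channel mismatch might be computed, assuming log-spaced analysis bands and the Greenwood place-frequency map; the channel count and band spacing are illustrative assumptions, not taken from the paper:

```python
import math

def greenwood_frequency(x):
    # Greenwood (1990) human place-frequency map: characteristic frequency
    # (Hz) at proportion x of basilar-membrane length, apex (x=0) to base (x=1).
    return 165.4 * (10 ** (2.1 * x) - 0.88)

def band_edges(f_low, f_high, n_channels):
    # Log-spaced band edges from f_low to f_high (Hz): n_channels bands
    # share a constant frequency ratio per channel.
    ratio = (f_high / f_low) ** (1.0 / n_channels)
    return [f_low * ratio ** i for i in range(n_channels + 1)]

# Output (carrier) bands fixed at 1.2-8.0 kHz as in the simulations; the
# input (analysis) low cutoff is varied, e.g. 0.6 kHz here (hypothetical).
# An 8-channel vocoder is an assumption for illustration.
output_edges = band_edges(1200.0, 8000.0, 8)
input_edges = band_edges(600.0, 8000.0, 8)

# Per-edge mismatch in octaves between output place and input frequency;
# it is largest at the low-frequency end and shrinks toward 8 kHz.
mismatch_oct = [math.log2(o / i) for o, i in zip(output_edges, input_edges)]
```

Lowering the input low-cutoff adds more speech information to the same output range, but stretches the low-frequency channels and therefore increases tonotopic mismatch, which is the trade-off the figure illustrates.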
Figure 2
Spectral envelopes for the steady portion of the vowels “heed” (left column) and “hod” (right column). The black lines show the original spectral envelope. The green lines show the spectral envelope with the simulated residual acoustic hearing (AH); the input and output frequency range was 0.1–0.6 kHz. The red lines show the spectral envelope with the CI simulations; the output frequency range was 1.2–8.0 kHz and the input frequency range was varied to preserve different amounts of speech information while introducing different amounts of tonotopic mismatch.
Figure 3
Mean percent correct (N = 10) for overall vowel recognition (A), F1 (B), F2 (C), and duration (D). The white bars show performance with simulated residual acoustic hearing (AH), the black bars show performance with the CI simulations alone, the red bars show performance with bimodal listening, and the green bars show performance with EAS. Performance for the CI simulations alone, bimodal, and EAS are shown as a function of the CI input low-cutoff frequency. The error bars show the standard error of the mean.
Figure 4
Mean integration efficiency (N = 10) for bimodal (filled circles) and EAS simulations (open triangles) as a function of the CI input low-cutoff frequency. The error bars show the standard error of the mean.
