Hear Res. 2023 Jan;427:108649.
doi: 10.1016/j.heares.2022.108649. Epub 2022 Nov 13.

Differences in neural encoding of speech in noise between cochlear implant users with and without preserved acoustic hearing


Hwan Shim et al. Hear Res. 2023 Jan.

Abstract

Cochlear implants (CIs) have evolved to combine residual acoustic hearing with electric hearing. CI users with residual acoustic hearing are expected to achieve better speech-in-noise perception than electric-only listeners because preserved acoustic cues aid in unmasking speech from background noise. This study sought the neural substrate of better speech unmasking in CI users with preserved acoustic hearing compared to those with a lower degree of acoustic hearing. Cortical evoked responses to speech in multi-talker babble noise were compared between 29 Hybrid (i.e., electric acoustic stimulation, or EAS) and 29 electric-only CI users. The amplitude ratio of evoked responses to speech and noise, or internal SNR, was significantly larger in the CI users with EAS. This result indicates that CI users with better residual acoustic hearing exhibit enhanced unmasking of speech from background noise.

Keywords: Cochlear implants; Electric acoustic stimulation (EAS); Electroencephalography (EEG); Speech unmasking; Speech-in-noise.


Figures

Figure 1.
Comparison of demographic and audiometric factors. Red bars represent the histogram of EAS users, while blue bars depict E-only users. Acoustic thresholds were averaged across 250 and 500 Hz. All comparisons except the duration of hearing loss exhibited a significant difference (Wilcoxon rank-sum test; *: p < 0.05, **: p < 0.01, ***: p < 0.001, N.S.: p > 0.05).
Figure 2.
A. An example waveform of stimuli. Gray: noise, green: target word. B. Grand-average evoked responses measured as global field power (GFP) for the +7 dB SNR condition. The vertical error bars around the peaks depict standard errors across subjects. Two peaks, one in the noise onset period (0.05–0.25 seconds) and one in the target onset period (1.05–1.25 seconds), were detected and used to calculate the internal SNR. Red: EAS users, blue: E-only users. C. Comparison of internal SNRs between the EAS (red) and E-only (blue) groups. The three horizontal lines in each box represent the 75th percentile, median, and 25th percentile. Filled circles represent individual subjects’ internal SNR values. Asterisk (*) indicates a statistically significant difference between groups (two-sample t-test, p = 0.019).
Figure 3.
A. Grand average GFP time courses before (dashed curves) and after (solid curves) artifact removal. B. Grand average topographies at the GFP peak positions following the noise onset. C. Grand average topographies at the GFP peak positions following the speech onset (after artifact removal).
Figure 4.
Relationship between the peak GFP amplitudes before and after artifact removal.
Figure 5.
A. An example waveform of stimuli. Gray: noise, green: target word. B. Grand-average evoked responses measured as global field power (GFP) for the +13 dB SNR condition. C. Comparison of internal SNRs between the EAS (red) and E-only (blue) groups.
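The internal SNR computation described in the abstract and the Figure 2 caption can be sketched in code. This is a minimal illustration, not the authors' analysis pipeline: the function names are hypothetical, GFP is taken here as the spatial standard deviation across electrodes at each time point, and the peak windows follow the Figure 2 caption (noise onset 0.05–0.25 s, target onset 1.05–1.25 s).

```python
import numpy as np

def global_field_power(eeg):
    """GFP of an evoked response.

    eeg: array of shape (n_channels, n_samples).
    GFP is computed as the standard deviation across electrodes
    at each time point, yielding a (n_samples,) time course.
    """
    return eeg.std(axis=0)

def internal_snr(gfp, times, noise_win=(0.05, 0.25), target_win=(1.05, 1.25)):
    """Ratio of the peak GFP after target (speech) onset to the
    peak GFP after noise onset, within the windows given in seconds.
    Windows default to those stated in the Figure 2 caption."""
    noise_peak = gfp[(times >= noise_win[0]) & (times <= noise_win[1])].max()
    target_peak = gfp[(times >= target_win[0]) & (times <= target_win[1])].max()
    return target_peak / noise_peak
```

Under this sketch, a larger internal SNR means the cortical response to the target word stands out more from the response to the background babble, which is the group difference reported between EAS and E-only users.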
