Top-Down Processes in Simulated Electric-Acoustic Hearing: The Effect of Linguistic Context on Bimodal Benefit for Temporally Interrupted Speech

Soo Hee Oh et al.

Ear Hear. 2016 Sep-Oct;37(5):582-92. doi: 10.1097/AUD.0000000000000298.

Abstract

Objectives: Previous studies have documented the benefits of bimodal hearing as compared with a cochlear implant alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher context (CUNY) sentences than for the lower context (IEEE) sentences.

Design: Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise-band vocoded sentences were presented to one ear, with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners' ability to fill in missing segments of speech using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50% duty cycle. Three vocoder channel conditions were tested for each sentence type (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE), and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and for similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined.
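For readers who want to reproduce the stimulus manipulation and the benefit measures, here is a minimal Python sketch. It assumes a sampled waveform and conventional definitions of the two gain measures; the function names and the normalized-gain formula are illustrative assumptions, since the abstract does not spell them out.

import numpy as np

def square_wave_gate(signal: np.ndarray, fs: float,
                     rate_hz: float = 5.0, duty: float = 0.5) -> np.ndarray:
    """Periodically silence a waveform (square-wave gating).

    At rate_hz = 5 and duty = 0.5 this alternates 100-ms "on" and
    100-ms "off" segments, matching the interruption parameters
    described in the Design section.
    """
    t = np.arange(len(signal)) / fs      # sample times in seconds
    phase = (t * rate_hz) % 1.0          # position within each gating period
    gate = (phase < duty).astype(signal.dtype)
    return signal * gate

def percentage_point_gain(bimodal: float, baseline: float) -> float:
    """Raw difference between bimodal and vocoder-alone percent-correct scores."""
    return bimodal - baseline

def normalized_gain(bimodal: float, baseline: float) -> float:
    """Improvement expressed as a percentage of the headroom available
    above baseline (assumed formula; not stated in the abstract)."""
    return 100.0 * (bimodal - baseline) / (100.0 - baseline)

Note that the gating above applies abrupt on/off transitions in the amplitude domain; interruption studies sometimes add brief onset/offset ramps to limit spectral splatter, which is omitted here for brevity.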

Results: Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear. For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons, which used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher than those for IEEE sentences (7 percentage points, or 15 points of normalized gain). The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. Furthermore, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the LP ear.
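To see how the two measures relate, note that under the normalized-gain definition assumed above, normalized gain = 100 × (percentage-point gain) / (100 − baseline). For an illustrative baseline near 53% (individual baselines spanned roughly 25-63%; see Figure 4), a 7-point improvement gives 100 × 7 / (100 − 53) ≈ 15, consistent with the two figures reported for the individual gain comparisons.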

Conclusions: Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared with continuous speech, suggesting that listeners' ability to restore missing speech information depends not only on top-down linguistic knowledge but also on the quality of the bottom-up sensory input.


Figures

Figure 1. Summary of activities completed during each of seven test sessions.

Figure 2. Mean percent-correct word recognition scores across six stimulus conditions and three listening conditions for 12 subjects. Error bars indicate ±1 standard error of the mean.

Figure 3. Comparison of bimodal benefit in the gV+gLPsp listening condition for CUNY and IEEE sentences in the 12- and 16-channel conditions. Benefit is shown as percentage-point gain (left) and normalized gain (right).

Figure 4. Individual subjects' percentage-point gain scores (left panel) and normalized gain scores (right panel) as a function of baseline performance for CUNY and IEEE sentences. Scores in the left panel were restricted in baseline value (25–63%) to satisfy statistical requirements (see text).
