Ear Hear. 2019 Nov/Dec;40(6):1316-1327.
doi: 10.1097/AUD.0000000000000712.

Factors Affecting Bimodal Benefit in Pediatric Mandarin-Speaking Chinese Cochlear Implant Users


Yang-Wenyi Liu et al. Ear Hear. 2019 Nov/Dec.

Abstract

Objectives: While fundamental frequency (F0) cues are important to both lexical tone perception and multitalker segregation, they are poorly perceived by cochlear implant (CI) users. Adding low-frequency acoustic hearing via a hearing aid in the contralateral ear may improve CI users' F0 perception. For English-speaking CI users, contralateral acoustic hearing has been shown to improve perception of target speech in noise and against competing talkers. For tonal languages such as Mandarin Chinese, F0 information is lexically meaningful. Given competing F0 information from multiple talkers and lexical tones, contralateral acoustic hearing may be especially beneficial for Mandarin-speaking CI users' perception of competing speech.

Design: Bimodal benefit (CI+hearing aid - CI-only) was evaluated in 11 pediatric Mandarin-speaking Chinese CI users. In experiment 1, speech recognition thresholds (SRTs) were adaptively measured using a modified coordinated response measure test; subjects were required to correctly identify 2 keywords from among 10 choices in each category. SRTs were measured with CI-only or bimodal listening in the presence of steady state noise (SSN) or competing speech with the same (M+M) or different voice gender (M+F). Unaided thresholds in the non-CI ear and demographic factors were compared with speech performance. In experiment 2, SRTs were adaptively measured in SSN for recognition of 5 keywords, a more difficult listening task than the 2-keyword recognition task in experiment 1.
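The abstract says SRTs were "adaptively measured" but does not specify the tracking rule. A common choice for keyword-identification tasks like this is a 1-down/1-up staircase that converges on roughly 50% correct, with the SRT taken as the mean target-to-masker ratio (TMR) at the reversal points. The sketch below illustrates that generic procedure only; the function name, step size, and reversal count are hypothetical, not taken from the study.

```python
def adaptive_srt(trial_correct, start_tmr=10.0, step=2.0, n_reversals=6):
    """Estimate an SRT with a generic 1-down/1-up adaptive staircase.

    trial_correct(tmr) -> bool: whether the listener identified the
    keywords correctly at the given target-to-masker ratio (dB).
    The rule converges on ~50% correct; the SRT estimate is the mean
    TMR at the reversal points.
    """
    tmr = start_tmr
    direction = None            # -1 = track moving down, +1 = moving up
    reversals = []
    while len(reversals) < n_reversals:
        # Make the task harder after a correct trial, easier after an error.
        new_direction = -1 if trial_correct(tmr) else +1
        if direction is not None and new_direction != direction:
            reversals.append(tmr)   # track changed direction: a reversal
        direction = new_direction
        tmr += new_direction * step
    return sum(reversals) / len(reversals)

# Deterministic toy listener: correct whenever TMR is above 0 dB.
# The track descends from 10 dB, then oscillates between 0 and 2 dB,
# so the reversal-point mean is 1.0 dB.
print(adaptive_srt(lambda tmr: tmr > 0))  # → 1.0
```

A lower SRT means the listener tolerated a less favorable TMR, so in the results below "significantly lower SRTs" always indicates better performance.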

Results: In experiment 1, SRTs were significantly lower for SSN than for competing speech in both the CI-only and bimodal listening conditions. There was no significant difference between CI-only and bimodal listening for SSN and M+F (p > 0.05); SRTs were significantly lower for CI-only than for bimodal listening for M+M (p < 0.05), suggesting bimodal interference. Subjects were able to make use of voice gender differences for bimodal listening (p < 0.05) but not for CI-only listening (p > 0.05). Unaided thresholds in the non-CI ear were positively correlated with bimodal SRTs for M+M (p < 0.006) but not for SSN or M+F. No significant correlations were observed between any demographic variables and SRTs (p > 0.05 in all cases). In experiment 2, SRTs were significantly lower with two than with five keywords (p < 0.05). A significant bimodal benefit was observed only for the 5-keyword condition (p < 0.05).

Conclusions: With the CI alone, subjects experienced greater interference with competing speech than with SSN and were unable to use voice gender differences to segregate talkers. For the coordinated response measure task, subjects experienced no bimodal benefit, and even bimodal interference, when competing talkers were the same voice gender. A bimodal benefit in SSN was observed for the five-keyword condition but not for the two-keyword condition, suggesting that bimodal listening may become more beneficial as the difficulty of the listening task increases. The present data suggest that bimodal benefit may depend on the type of masker and/or the difficulty of the listening task.


Figures

Figure 1.
Boxplots of unaided (left panel) and aided thresholds in the non-CI ear (right panel), as a function of audiometric frequency and averaged across audiometric frequencies. The boxes show the 25th and 75th percentiles, the horizontal line shows the median, the squares show the mean, the error bars show the 10th and 90th percentiles, and the circles show outliers (>90th percentile, <10th percentile). The thick solid line shows the mean thresholds across audiometric frequency.
Figure 2.
Waveforms (left column), low-frequency amplitude and pitch contours (middle column), and CI electrodograms (right column) for an example target keyword (“8”) produced by the male target talker (1st and 4th rows), an example masker word (“4”) produced by a competing male (2nd row) or female talker (5th row), and mixed together at 0 dB TMR (3rd and 6th rows). The low-frequency amplitude and pitch contours were extracted after low-pass filtering to 500 Hz to simulate the available cues with contralateral acoustic hearing. The electrodograms were generated according to the default stimulation parameters used in Cochlear Corp. devices.
Figure 3.
Boxplots of SRTs with CI-only or bimodal listening for the different masker conditions in Experiment 1. The boxes show the 25th and 75th percentiles, the horizontal line shows the median, the squares show the mean, the error bars show the 10th and 90th percentiles, and the circles show outliers (>90th percentile, <10th percentile). The brackets indicate significant differences between listening and/or masker conditions.
Figure 4.
Left panel: Bimodal SRTs for Experiment 1 as a function of mean unaided thresholds in the non-CI ear averaged across audiometric frequencies between 125 and 2000 Hz. The circles, triangles, and squares show SRTs for the SSN, M+M, and M+F masker conditions, respectively; the open symbols show outliers in unaided thresholds (see Fig. 1). The solid lines show linear regressions fit to the data after removing outliers; r and p values are shown in the legend. Right panel: Same as the left panel, but for the change in SRT (bimodal – CI-only) as a function of mean unaided thresholds in the non-CI ear; values >0 indicate bimodal interference, and values <0 indicate bimodal benefit.
Figure 5.
Boxplots of SRTs in SSN with CI-only or bimodal listening for the different numbers of keywords in Experiment 2; note that the data for the 2-keyword condition are from Experiment 1 and are the same as the SSN data shown in Figure 3. The boxes show the 25th and 75th percentiles, the horizontal line shows the median, the squares show the mean, the error bars show the 10th and 90th percentiles, and the circles show outliers (>90th percentile, <10th percentile). The brackets indicate significant differences between listening and/or masker conditions.

