Experiments on Auditory-Visual Perception of Sentences by Users of Unilateral, Bimodal, and Bilateral Cochlear Implants

Michael F. Dorman et al.

J Speech Lang Hear Res. 2016 Dec 1;59(6):1505-1519. doi: 10.1044/2016_JSLHR-H-15-0312
Abstract

Purpose: Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs).

Method: Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users.

Results: (a) Most CI users report having access to both A and V information most of the time when listening to speech. (b) CI users did not achieve better speechreading scores than listeners with normal hearing. (c) Sentences that were easy to speechread provided 12 percentage points more gain in speech understanding than sentences that were difficult. (d) Ease of speechreading sentences is related to phrase familiarity. (e) Users of bimodal CIs benefit from low-frequency acoustic hearing even when V cues are available, and a second CI adds to the benefit of a single CI when V cues are available. (f) V information facilitates lexical segmentation by improving recognition of the number of syllables produced and the relative strength of those syllables.

Conclusions: Our data are consistent with the view that V information improves CI users' ability to identify syllables in the acoustic stream and to recognize the relative strengths of adjacent syllables. Enhanced syllable resolution allows better identification of word onsets, which, when combined with place-of-articulation information from visible consonants, improves lexical access.


Figures

Figure 1. Survey results for questions relating to listening environments encountered by CI users.

Figure 2. Percent correct word recognition in sentences as a function of type of test material in a vision-only condition. Sample size is indicated on each histogram. Open histograms = performance of listeners with normal hearing; gray histogram = performance of CI users; error bars = ±1 SEM.

Figure 3. Percent correct word recognition as a function of test condition. The parameter is the speechreading difficulty of the material. CI = cochlear implant; V = vision; error bars = ±1 SEM.

Figure 4. (a) The summed number of occurrences of the consonants /m/, /b/, /p/, /w/, /s/, /ʃ/, /θ/, /ð/, /f/, and /v/ in sentence lists that vary in speechreading difficulty from easy (List 1) to difficult (List 12). (b) The estimated median and (c) minimum occurrences of 3-grams (three-word sequences) in the Kopra lists (referenced to Google Ngrams) as a function of list difficulty, shown by the open squares and dotted line. Speechreading accuracy, reproduced from Figure 2, is shown by the filled circles and solid line. (A sketch of this kind of text analysis follows the figure list.)

Figure 5. Percent correct word recognition as a function of test condition for users of bilateral CIs. CI = cochlear implant; V = vision; error bars = ±1 SEM.

Figure 6. Percent correct word recognition in sentences as a function of test condition for CI users who had a low-frequency acoustic hearing aid (HA) in the contralateral ear. (a) Performance of four subjects who did not benefit from the HA when visual information was available. (b) Performance of six subjects who did benefit from the HA when visual information was available. (c) Performance of seven subjects who did not benefit from the HA when it was added to the CI but did when visual information was available. CI = cochlear implant; HA = low-frequency acoustic hearing aid; V = vision; error bars = ±1 SEM.

Figure 7. (a) Percent correct word recognition in CI and CI+V test conditions for phrases with low interword probabilities. (b) Percent total consonant errors in CI and CI+V test conditions for place, manner, and voicing. (c) Number of lexical-boundary errors in CI and CI+V test conditions. (d) Number of syllable-insertion and -deletion errors in CI and CI+V test conditions. (e) IS/IW and DS/DW ratios for CI-only and CI+V conditions. CI = cochlear implant; V = vision; IS = insertion before a strong syllable; IW = insertion before a weak syllable; DS = deletion before a strong syllable; DW = deletion before a weak syllable.

Figure 8. Overall results from subjects tested in A and AV conditions in Experiments 2b and 4. (a) Mean percent correct in A and AV conditions. Error bars = ±1 SD. (b) Percentage-point change from visual information. (c) Percentage-point change in the AV condition as a function of A score.
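The analysis behind Figure 4 rests on two simple text measures over the sentence lists: a count of highly visible consonants and the corpus frequency of each three-word sequence (referenced to Google Ngrams). As a rough illustration of that kind of analysis, and not the authors' code, the Python sketch below counts visible consonants using the CMU Pronouncing Dictionary via NLTK and enumerates the 3-grams whose frequencies would then be looked up; the example sentences are hypothetical stand-ins for the Kopra lists, and the Ngrams lookup itself is not shown because no public API is assumed.

    # Minimal sketch of the text measures in Figure 4 (illustrative only).
    # Requires: pip install nltk; then nltk.download('cmudict')
    from nltk.corpus import cmudict

    # ARPAbet codes for the visible consonants named in Figure 4(a):
    # /m b p w s sh th dh f v/
    VISIBLE = {"M", "B", "P", "W", "S", "SH", "TH", "DH", "F", "V"}

    PRON = cmudict.dict()  # word -> list of ARPAbet pronunciations

    def visible_consonant_count(sentence):
        """Sum occurrences of visible consonants over a sentence's words."""
        total = 0
        for word in sentence.lower().split():
            word = word.strip(".,?!'\"")
            prons = PRON.get(word)
            if not prons:
                continue  # skip out-of-vocabulary words
            # Strip stress digits ('AH0' -> 'AH'), then count visible phones.
            phones = [p.rstrip("012") for p in prons[0]]
            total += sum(1 for p in phones if p in VISIBLE)
        return total

    def trigrams(sentence):
        """Return the three-word sequences whose corpus frequencies would be
        looked up in Google Ngrams (the lookup itself is not shown)."""
        words = [w.strip(".,?!'\"").lower() for w in sentence.split()]
        return [tuple(words[i:i + 3]) for i in range(len(words) - 2)]

    # Hypothetical stand-ins for an easy and a difficult sentence list.
    list_easy = ["Good morning, how are you?", "Thank you very much."]
    list_hard = ["The cartographer annotated obscure ridgelines."]

    for name, sentences in [("easy", list_easy), ("hard", list_hard)]:
        n_visible = sum(visible_consonant_count(s) for s in sentences)
        grams = [g for s in sentences for g in trigrams(s)]
        print(name, "visible consonants:", n_visible, "3-grams:", len(grams))

Under the logic of Figure 4, easier lists should show both more visible consonants and more familiar (higher-frequency) 3-grams; per-list totals like these are what would be plotted against speechreading accuracy.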
