Ear Hear. 2022 Jul-Aug;43(4):1262-1272. doi: 10.1097/AUD.0000000000001179. Epub 2021 Dec 8.

The Impact of Synchronized Cochlear Implant Sampling and Stimulation on Free-Field Spatial Hearing Outcomes: Comparing the ciPDA Research Processor to Clinical Processors


Stephen R Dennison et al. Ear Hear. 2022 Jul-Aug.

Abstract

Objectives: Bilateral cochlear implant (BiCI) listeners use independent processors in each ear. This independence and lack of shared hardware prevents control of the timing of sampling and stimulation across ears, which precludes the development of bilaterally coordinated signal processing strategies. As a result, these devices potentially reduce access to binaural cues and introduce disruptive artifacts. For example, measurements from two clinical processors demonstrate that independently running processors introduce interaural incoherence. These issues are typically avoided in the laboratory by using research processors with bilaterally synchronized hardware. However, these research processors do not typically run in real time and, because of their benchtop nature, are difficult to take into the real world. Hence, it has been difficult to answer whether hardware synchronization alone can reduce bilateral stimulation artifacts and thereby potentially improve functional spatial hearing performance. The CI personal digital assistant (ciPDA) research processor, which uses one clock to drive two processors, presented an opportunity to examine whether synchronization of hardware alone can have an impact on spatial hearing performance.

Design: Free-field sound localization and spatial release from masking (SRM) were assessed in 10 BiCI listeners using both their clinical processors and the synchronized ciPDA processor. For sound localization, localization accuracy was compared within subjects for the two processor types. For SRM, speech reception thresholds (SRTs) were compared for spatially separated and co-located configurations, and the amount of unmasking was compared between synchronized and unsynchronized hardware. No deliberate changes were made to the sound processing strategy on the ciPDA to restore or improve binaural cues.

Results: There was no significant difference in localization accuracy between unsynchronized and synchronized hardware (p = 0.62). Speech reception thresholds were higher with the ciPDA. In addition, although five of eight participants demonstrated improved SRM with synchronized hardware, there was no significant difference in the amount of unmasking due to spatial separation between synchronized and unsynchronized hardware (p = 0.21).

Conclusions: Using processors with synchronized hardware did not yield consistent improvements in sound localization or SRM across individuals, suggesting that synchronization of hardware alone is not sufficient for improving spatial hearing outcomes. Further work is needed to improve sound coding strategies to facilitate access to spatial hearing cues. This study provides a benchmark for spatial hearing performance with real-time, bilaterally synchronized research processors.


Conflict of interest statement

The authors have no conflicts of interest to disclose.

Figures

Figure 1:
Example stimulation cycles for different kinds of synchronization. Left pulses are blue; right pulses are red. (a) Synchronized pulses, (b) Constant offset, (c) Randomly jittered pulses.
Figure 2:
Simulated metrics of synchronization. Varying the simulated number of maxima N, which controls the maximum amount of random displacement, yields different average offsets and coherence values. The model assumes that the pulse timing displacement in each ear is an independent and identically distributed (i.i.d.) uniform random variable that takes any value on an interval of up to 1.1 ms (one period at 900 pulses per second per channel) with equal probability, i.e., X ~ U[0, α]. For example, if α = 1/900 s, the stimulation period of a processor set to 900 pps, then the expected value of the resulting triangular random variable (the interaural offset |X_L − X_R|) is α/3 ≈ 370 μs. Processor outputs were simulated as five sequences of 900 pulses at five different electrodes. Each pulse time was selected from uniform distributions with lower bounds of 0 ms and upper bounds ranging from 0 to 1.1 ms. Both average offset (Fig. 2a) and IC (Fig. 2b) were calculated for these simulations as a function of the number of maxima, i.e., the discrete random variables X were drawn as X ~ U{0, 1/(900N), 2/(900N), …, (N−1)/(900N)}, with N varying from 1 to 8.
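For readers who want to reproduce this model, the following is a minimal Python sketch (not the authors' code). It draws slot-quantized pulse displacements per the caption and computes the two metrics. The 50-μs rectangular pulse shape, the 200-kHz raster rate, and the definition of IC as the peak of the normalized cross-correlation of the two pulse trains are assumptions not stated in the caption.

```python
import numpy as np
from scipy.signal import correlate

RATE = 900            # pulses per second per channel (per the caption)
PERIOD = 1.0 / RATE   # ~1.11 ms stimulation period

def simulate_metrics(n_maxima, n_pulses=900, fs=200_000,
                     pulse_width=50e-6, seed=0):
    """Simulate one bilateral electrode pair under the caption's model.

    Each pulse is displaced by X = k/(900*N), with k uniform on
    {0, ..., N-1}, independently per ear. Pulse shape, raster rate,
    and the IC definition below are assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    slot = PERIOD / n_maxima
    grid = np.arange(n_pulses) * PERIOD
    t_l = grid + rng.integers(0, n_maxima, n_pulses) * slot  # left ear
    t_r = grid + rng.integers(0, n_maxima, n_pulses) * slot  # right ear

    # Average interaural offset. Analytically E|X_L - X_R| =
    # (N^2 - 1)/(3 N^2) * PERIOD, approaching 1/(3*900) s = 370 us.
    avg_offset = np.abs(t_l - t_r).mean()

    # Rasterize as rectangular pulses, then take the peak of the
    # normalized cross-correlation as IC (assumed definition).
    n = int((n_pulses * PERIOD + pulse_width) * fs) + 1
    w = max(1, int(pulse_width * fs))
    left, right = np.zeros(n), np.zeros(n)
    left[np.round(t_l * fs).astype(int)] = 1.0
    right[np.round(t_r * fs).astype(int)] = 1.0
    left = np.convolve(left, np.ones(w))[:n]
    right = np.convolve(right, np.ones(w))[:n]
    xcorr = correlate(left, right, mode="full", method="fft")
    ic = xcorr.max() / np.sqrt(np.dot(left, left) * np.dot(right, right))
    return avg_offset, ic

for n_max in range(1, 9):
    off, ic = simulate_metrics(n_max)
    print(f"N = {n_max}: mean offset = {off * 1e6:5.0f} us, IC = {ic:.2f}")
```

With N = 1 the ears are perfectly synchronized (offset 0, IC = 1); as N grows, the mean offset approaches 370 μs and IC falls, qualitatively reproducing the trends plotted in Fig. 2.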
Figure 3:
Measured metrics of synchronization. Measurements were conducted in a sound booth (2.82 × 3.04 × 2.01 m) (Acoustic Systems, Austin, TX). Processor earpieces were placed side by side at a height of 97 cm, 80 cm from a loudspeaker (Tannoy Reveal 402, Coatbridge, Scotland). A constant-amplitude sinusoid, matched to the center frequency of the target electrode channel, was presented at 60 dB SPL from the loudspeaker. Voltage outputs from the electrode pair were recorded using an "implant in a box" (CI24RE implants) provided by Cochlear, Ltd. The implant was attached to a bank of 8.2 kOhm resistors, and voltages were measured using a National Instruments (Austin, TX) data acquisition card (NI USB-6343). Threshold and most comfortable levels in the MAP were set to 100 and 200 current units, respectively. The pulse rate was set to 900 pulses per second, and the number of maxima for ACE processing was set to 8.
Figure 4:
Examples of pulsatile outputs recorded from a matched pair of left and right electrodes from various cochlear implant processors. Each example was recorded from electrode number 12. Recordings show the output from a National Instruments (Austin, TX) data acquisition card (NI USB-6343).
Figure 5:
Interaural coherence (IC) as a function of spatial location and SNR for simulated spatialized sounds. The sounds used for the simulation are the same as those used in the psychophysical listening test, and were spatialized using head-related impulse responses measured in the ear canal of a KEMAR manikin (Knowles Electronics Manikin for Acoustic Research; G.R.A.S. Sound & Vibration, Holte, Denmark) in the same room as that used for experimental testing. ACE processing was simulated using a MATLAB implementation developed for the CCi-MOBILE. MAPs were programmed with 22 active electrodes, standard FATs, and T and C levels of 100 and 150 CUs, respectively. Five repetitions of the stimulus were simulated per loudspeaker location, and IC was calculated for each electrode channel and averaged over repetitions. Unsynchronized processor output was simulated as in the Introduction, with a random displacement of pulse timing of up to 555 μs. The top row shows the IC for simulated synchronized hardware: (a) localization stimuli, (b) co-located speech stimuli, and (c) spatially separated speech stimuli. The bottom row shows the IC for simulated unsynchronized hardware: (d) localization stimuli, (e) co-located speech stimuli, and (f) spatially separated speech stimuli.
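The captions do not spell out how IC is computed; a common definition in the binaural literature, assumed here, is the peak of the normalized interaural cross-correlation of the left and right channel signals x_L and x_R:

\[
\mathrm{IC} \;=\; \max_{\tau}\;
\frac{\int x_L(t)\, x_R(t+\tau)\,\mathrm{d}t}
     {\sqrt{\int x_L^2(t)\,\mathrm{d}t \,\int x_R^2(t)\,\mathrm{d}t}} .
\]

Under this definition, IC = 1 means the two channels are identical up to a time shift, while values near 0 indicate unrelated outputs.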
Figure 6:
Comparing root mean square (RMS) error in sound localization for listeners using unsynchronized processors (their clinical devices) vs. synchronized hardware (the ciPDA research processor). Lower error scores are better; listeners whose markers fall in the shaded region performed better with synchronized hardware. Error bars indicate group mean and standard deviation.
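The caption does not give the formula, but RMS localization error is conventionally computed per listener over all T trials as

\[
\mathrm{RMS\ error} \;=\; \sqrt{\frac{1}{T}\sum_{t=1}^{T}\bigl(\hat{\theta}_t - \theta_t\bigr)^2},
\]

where \(\theta_t\) is the azimuth of the target loudspeaker on trial t and \(\hat{\theta}_t\) is the listener's response.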
Figure 7:
Average SRT scores for unsynchronized (clinical) and synchronized (ciPDA) hardware in both symmetric and co-located configurations. Error bars represent standard deviation. All pairwise comparisons are significantly different at p < 0.05 except unsynchronized co-located versus synchronized symmetric. The data plotted here are used to calculate spatial release from masking.
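Consistent with Figures 7 and 8, SRM is the benefit, in dB, of spatially separating the target from the maskers, computed as the difference between the two SRTs:

\[
\mathrm{SRM} \;=\; \mathrm{SRT}_{\text{co-located}} - \mathrm{SRT}_{\text{separated}},
\]

so that positive values indicate better (lower) thresholds in the spatially separated configuration.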
Figure 8:
Comparing SRM for unsynchronized (clinical processors) and synchronized (ciPDA) conditions. Error bars indicate group mean and standard deviation. Higher scores indicate more release from masking. Shaded region indicates better SRM with the synchronized stimulation. Listeners outside the dashed region demonstrated a clinically-relevant improvement in SRM.
