Decoding Hearing-Related Changes in Older Adults' Spatiotemporal Neural Processing of Speech Using Machine Learning
- PMID: 32765215
- PMCID: PMC7378401
- DOI: 10.3389/fnins.2020.00748
Abstract
Speech perception in noisy environments depends on complex interactions between sensory and cognitive systems. In older adults, such interactions may be affected, especially in individuals with more severe age-related hearing loss. Using a data-driven approach, we assessed the temporal (when in time) and spatial (where in the brain) characteristics of cortical speech-evoked responses that distinguish older adults with and without mild hearing loss. We performed source analyses to estimate cortical surface signals from EEG recordings during a phoneme discrimination task conducted under clear and noise-degraded conditions. We computed source-level ERPs (i.e., mean activation within each ROI) for each of the 68 ROIs of the Desikan-Killiany (DK) atlas, averaged over 100 randomly chosen trials (sampled without replacement), to form feature vectors. We adopted a multivariate feature selection method, stability selection and control, to choose features that are consistent over a range of model parameters. We used a parameter-optimized support vector machine (SVM) as the classifier to investigate the time course and brain regions that segregate groups and speech clarity. For clear speech perception, whole-brain data revealed a classification accuracy of 81.50% [area under the curve (AUC) 80.73%; F1-score 82.00%], distinguishing groups within ∼60 ms after speech onset (i.e., as early as the P1 wave). We observed lower accuracy (78.12%; AUC 77.64%; F1-score 78.00%) and delayed classification performance when speech was embedded in noise, with group segregation at 80 ms. Separate analyses of left (LH) and right hemisphere (RH) regions showed that LH speech activity distinguished the hearing groups better than activity measured in the RH.
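The classification pipeline described above can be sketched in outline. This is a minimal illustration with synthetic data, not the authors' code: the data shapes, the RBF kernel, and the hyperparameter grid are assumptions, while the 68 ROIs × 21 time windows = 1428 features follows the feature count reported below.

```python
# Illustrative sketch (not the authors' implementation): classify hearing groups
# from source-level ERP feature vectors using a parameter-optimized SVM.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_features = 120, 68 * 21          # 68 ROIs x 21 time windows = 1428 features
X = rng.standard_normal((n_samples, n_features))  # stand-in for trial-averaged source ERPs
y = rng.integers(0, 2, n_samples)             # 0 = normal hearing, 1 = mild hearing loss

# "Parameter-optimized SVM": grid-search C and gamma with stratified cross-validation.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
grid = GridSearchCV(svm,
                    {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
                    cv=StratifiedKFold(5), scoring="accuracy")
grid.fit(X, y)

y_pred = grid.predict(X)
print("accuracy: %.2f%%" % (100 * accuracy_score(y, y_pred)))
print("F1-score: %.2f%%" % (100 * f1_score(y, y_pred)))
print("AUC:      %.2f%%" % (100 * roc_auc_score(y, grid.predict_proba(X)[:, 1])))
```

In practice the reported accuracy, AUC, and F1 would come from held-out data rather than training-set predictions as shown here.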
Moreover, stability selection analysis identified 12 brain regions (among 1428 total spatiotemporal features from 68 regions) where source activity segregated the groups with >80% accuracy for clear speech, whereas 16 regions were needed to achieve a comparable level of group segregation (78.7% accuracy) for noise-degraded speech. Our results identify critical time courses and brain regions that distinguish mild hearing loss from normal hearing in older adults and confirm a larger number of active areas, particularly in the RH, when processing noise-degraded speech information.
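The stability selection step, choosing features that are selected consistently across subsamples and model parameters, can be sketched as follows. This is a generic illustration with synthetic data, not the authors' implementation: the sparse model (L1-penalized logistic regression), the regularization sweep, and the 0.6 selection-probability threshold are assumptions.

```python
# Illustrative sketch (not the authors' implementation) of stability selection:
# fit an L1-penalized model on many random half-subsamples across a range of
# regularization strengths, then keep features chosen in a high fraction of fits.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_samples, n_features = 120, 1428              # 68 ROIs x 21 time windows (hypothetical)
X = rng.standard_normal((n_samples, n_features))
y = rng.integers(0, 2, n_samples)

n_resamples = 20
Cs = [0.05, 0.1, 0.5]                          # sweep of model parameters (inverse regularization)
counts = np.zeros(n_features)
for C in Cs:
    for _ in range(n_resamples):
        idx = rng.choice(n_samples, size=n_samples // 2, replace=False)
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(X[idx], y[idx])
        counts += (np.abs(clf.coef_.ravel()) > 1e-8)  # feature survived this fit

# Selection probability per feature; "stable" features exceed a chosen threshold.
selection_prob = counts / (len(Cs) * n_resamples)
stable = np.flatnonzero(selection_prob > 0.6)
print("number of stable features:", stable.size)
```

Features that survive across both the subsampling and the parameter sweep are the spatiotemporal candidates (ROI × time window) reported in the abstract.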
Keywords: aging; event-related potentials; hearing loss; machine learning; speech perception; stability selection and control; support vector machine.
Copyright © 2020 Mahmud, Ahmed, Al-Fahad, Moinuddin, Yeasin, Alain and Bidelman.
Similar articles
- Data-driven machine learning models for decoding speech categorization from evoked brain responses. J Neural Eng. 2021;18(4). doi: 10.1088/1741-2552/abecf0. PMID: 33690177.
- Age-related hearing loss increases full-brain connectivity while reversing directed signaling within the dorsal-ventral pathway for speech. Brain Struct Funct. 2019;224(8):2661-2676. doi: 10.1007/s00429-019-01922-9. PMID: 31346715.
- Neuroanatomical and resting state EEG power correlates of central hearing loss in older adults. Brain Struct Funct. 2018;223(1):145-163. doi: 10.1007/s00429-017-1477-0. PMID: 28735495.
- Cortical-brainstem interplay during speech perception in older adults with and without hearing loss. Front Neurosci. 2023;17:1075368. doi: 10.3389/fnins.2023.1075368. PMID: 36816123.
- Speech-in-noise representation in the aging midbrain and cortex: Effects of hearing loss. PLoS One. 2019;14(3):e0213899. doi: 10.1371/journal.pone.0213899. PMID: 30865718.
Cited by
- High-Frequency Transcranial Random Noise Stimulation Modulates Gamma-Band EEG Source-Based Large-Scale Functional Network Connectivity in Patients with Schizophrenia: A Randomized, Double-Blind, Sham-Controlled Clinical Trial. J Pers Med. 2022;12(10):1617. doi: 10.3390/jpm12101617. PMID: 36294755.
- Data-driven machine learning models for decoding speech categorization from evoked brain responses. J Neural Eng. 2021;18(4). doi: 10.1088/1741-2552/abecf0. PMID: 33690177.
- Online Left-Hemispheric In-Phase Frontoparietal Theta tACS Modulates Theta-Band EEG Source-Based Large-Scale Functional Network Connectivity in Patients with Schizophrenia: A Randomized, Double-Blind, Sham-Controlled Clinical Trial. Biomedicines. 2023;11(2):630. doi: 10.3390/biomedicines11020630. PMID: 36831167.
- Machine Learning-Based Prediction of the Outcomes of Cochlear Implantation in Patients With Cochlear Nerve Deficiency and Normal Cochlea: A 2-Year Follow-Up of 70 Children. Front Neurosci. 2022;16:895560. doi: 10.3389/fnins.2022.895560. PMID: 35812216.
- Subcortical rather than cortical sources of the frequency-following response (FFR) relate to speech-in-noise perception in normal-hearing listeners. Neurosci Lett. 2021;746:135664. doi: 10.1016/j.neulet.2021.135664. PMID: 33497718.