Decoding Multiple Sound-Categories in the Auditory Cortex by Neural Networks: An fNIRS Study
- PMID: 33994978
- PMCID: PMC8113416
- DOI: 10.3389/fnhum.2021.636191
Abstract
This study aims to decode the hemodynamic responses (HRs) evoked by multiple sound categories using functional near-infrared spectroscopy (fNIRS). Six different sounds were presented as stimuli (English, non-English, annoying, nature, music, and gunshot). Oxy-hemoglobin (HbO) concentration changes were measured over both hemispheres of the auditory cortex while 18 healthy subjects listened to 10-s blocks of the six sound categories. Long short-term memory (LSTM) networks were used as the classifier. The six-class classification accuracy was 20.38 ± 4.63%. Although the LSTM networks' performance was only slightly above the chance level (16.67% for six classes), it is noteworthy that the data could be classified subject-wise without feature selection.
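The decoding scheme described above can be sketched as an LSTM that consumes one 10-s HbO block (a channels-by-time-steps array) and emits a probability over the six sound categories. The sketch below is a minimal NumPy illustration under assumed shapes (a ~10 Hz sampling rate giving 100 time steps, 8 fNIRS channels, 16 hidden units); it is not the authors' actual trained model, and the weights are random placeholders standing in for learned parameters.

```python
import numpy as np

# Assumed illustrative shapes, not the authors' actual pipeline:
# one 10-s HbO block at ~10 Hz -> 100 time steps, 8 fNIRS channels.
rng = np.random.default_rng(0)
T, C, H, K = 100, 8, 16, 6  # time steps, channels, hidden units, sound categories

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialised LSTM parameters (gates stacked as i, f, g, o);
# in the study these would be learned from the measured HbO data.
W = rng.standard_normal((4 * H, C)) * 0.1   # input-to-hidden weights
U = rng.standard_normal((4 * H, H)) * 0.1   # recurrent weights
b = np.zeros(4 * H)
Wy = rng.standard_normal((K, H)) * 0.1      # readout to the 6 categories
by = np.zeros(K)

def lstm_classify(x):
    """Run one HbO block x of shape (T, C) through an LSTM cell
    and return a softmax probability over the K sound categories."""
    h = np.zeros(H)
    c = np.zeros(H)
    for t in range(T):
        z = W @ x[t] + U @ h + b
        i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell-state update
        h = sigmoid(o) * np.tanh(c)                     # hidden state
    logits = Wy @ h + by
    p = np.exp(logits - logits.max())                   # stable softmax
    return p / p.sum()

probs = lstm_classify(rng.standard_normal((T, C)))
```

With random weights the output is near-uniform, i.e. close to the 1/6 chance level the paper compares against; training the gates on labeled HbO blocks is what lifts accuracy above that baseline.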
Keywords: auditory cortex; decoding; deep learning; functional near-infrared spectroscopy (fNIRS); long short-term memories (LSTMs).
Copyright © 2021 Yoo, Santosa, Kim and Hong.
Conflict of interest statement
The authors declare that they have no conflict of interest. This research was conducted in the absence of any commercial or financial relationship that could be construed as a potential conflict of interest.