Advances in brain-computer interface for decoding speech imagery from EEG signals: a systematic review
- PMID: 39712121
- PMCID: PMC11655741
- DOI: 10.1007/s11571-024-10167-0
Abstract
Numerous individuals encounter challenges in verbal communication due to various factors, including physical disabilities, neurological disorders, and strokes. In response to this pressing need, technology has actively pursued solutions to bridge the communication gap, particularly in contexts where traditional methods are inadequate. Electroencephalography (EEG) has emerged as a primary non-invasive method for measuring brain activity, offering valuable insights from a cognitive neurodevelopmental perspective. It forms the basis for Brain-Computer Interfaces (BCIs) that provide a communication channel for individuals with neurological impairments, thereby empowering them to express themselves effectively. EEG-based BCIs, especially those adapted to decode imagined speech from EEG signals, represent a significant advancement in enabling individuals with speech disabilities to communicate through text or synthesized speech. By utilizing cognitive neurodevelopmental insights, researchers have been able to develop innovative approaches for interpreting EEG signals and translating them into meaningful communication outputs. To aid researchers in effectively addressing this complex challenge, this review article synthesizes key findings from significant state-of-the-art studies. It examines the methodologies employed by various researchers, including preprocessing techniques, feature extraction methods, and classification algorithms based on Machine Learning and Deep Learning approaches, as well as their integration. Furthermore, the review outlines potential avenues for future research, with the goal of advancing the practical implementation of EEG-based BCI systems for decoding imagined speech from a cognitive neurodevelopmental perspective.
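To make the pipeline structure surveyed here concrete, the sketch below illustrates one common pattern: band-pass preprocessing, spectral (band-power) feature extraction, and a classical machine-learning classifier for imagined-speech trials. It is a minimal, hedged example on synthetic data; the sampling rate, channel count, frequency bands, and classifier choice are illustrative assumptions and are not taken from any study cited in this review.

```python
# Minimal sketch of an EEG speech-imagery decoding pipeline of the kind this
# review surveys: preprocessing -> feature extraction -> classification.
# All shapes, bands, and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 256            # sampling rate in Hz (assumed)
N_CHANNELS = 16     # EEG channels (assumed)
N_TRIALS = 120      # imagined-speech trials (assumed)
N_SAMPLES = 2 * FS  # 2-second epochs

def bandpass(epochs, lo=4.0, hi=40.0, fs=FS, order=4):
    """Zero-phase band-pass filter applied along the time axis of each epoch."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def bandpower_features(epochs, fs=FS, bands=((4, 8), (8, 13), (13, 30), (30, 40))):
    """Mean Welch power per canonical band and channel -> one feature vector per trial."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))   # shape: (trials, channels)
    return np.concatenate(feats, axis=-1)            # shape: (trials, channels * n_bands)

# Synthetic placeholder data standing in for recorded EEG epochs and word labels.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((N_TRIALS, N_CHANNELS, N_SAMPLES))
y = rng.integers(0, 2, size=N_TRIALS)  # e.g., two imagined words

X = bandpower_features(bandpass(X_raw))
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

In the studies reviewed, each stage of this skeleton is typically replaced by a more elaborate alternative, for example common spatial patterns or time-domain statistics for feature extraction, and deep networks in place of the support vector machine.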
Keywords: Brain computer interface (BCI); Deep learning; Electroencephalography (EEG); Imagined speech; Machine learning; Speech imagery.
© The Author(s), under exclusive licence to Springer Nature B.V. 2024. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Conflict of interest statement
The authors declare no potential conflict of interest.
