TCANet: a temporal convolutional attention network for motor imagery EEG decoding
- PMID: 40524963
- PMCID: PMC12167204
- DOI: 10.1007/s11571-025-10275-5
Abstract
Decoding motor imagery electroencephalogram (MI-EEG) signals is fundamental to the development of brain-computer interface (BCI) systems. However, robust decoding remains a challenge due to the inherent complexity and variability of MI-EEG signals. This study proposes the Temporal Convolutional Attention Network (TCANet), a novel end-to-end model that hierarchically captures spatiotemporal dependencies by progressively integrating local, fused, and global features. Specifically, TCANet employs a multi-scale convolutional module to extract local spatiotemporal representations across multiple temporal resolutions. A temporal convolutional module then fuses and compresses these multi-scale features while modeling both short- and long-term dependencies. Subsequently, a stacked multi-head self-attention mechanism refines the global representations, followed by a fully connected layer that performs MI-EEG classification. The proposed model was systematically evaluated on the BCI IV-2a and IV-2b datasets under both subject-dependent and subject-independent settings. In subject-dependent classification, TCANet achieved accuracies of 83.06% and 88.52% on BCI IV-2a and IV-2b, respectively, with corresponding Kappa values of 0.7742 and 0.7703, outperforming multiple representative baselines. In the more challenging subject-independent setting, TCANet achieved competitive performance on IV-2a and demonstrated potential for improvement on IV-2b. The code is available at https://github.com/snailpt/TCANet.
Keywords: Brain-computer interface (BCI); Deep learning (DL); Motor imagery (MI); Self-attention; Temporal convolutional network (TCN).
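The abstract describes a four-stage pipeline: multi-scale convolutions for local spatiotemporal features, a temporal convolutional (TCN-style) module for short- and long-term dependencies, stacked multi-head self-attention for global refinement, and a fully connected classifier. The sketch below illustrates that pipeline in PyTorch. All layer widths, kernel sizes, pooling factors, and block counts are illustrative assumptions rather than the published hyperparameters; consult the authors' repository (https://github.com/snailpt/TCANet) for the reference implementation.

```python
# Minimal sketch of the TCANet pipeline as described in the abstract.
# Hyperparameters here are assumptions, not the authors' published values.
import torch
import torch.nn as nn


class MultiScaleConv(nn.Module):
    """Parallel temporal convolutions at several kernel widths (local features),
    followed by a depthwise spatial convolution across EEG channels."""
    def __init__(self, n_channels, out_per_branch=8, kernel_sizes=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(1, out_per_branch, (1, k), padding=(0, k // 2))
            for k in kernel_sizes
        )
        n_filters = out_per_branch * len(kernel_sizes)
        self.spatial = nn.Sequential(
            nn.Conv2d(n_filters, n_filters, (n_channels, 1), groups=n_filters),
            nn.BatchNorm2d(n_filters), nn.ELU(), nn.AvgPool2d((1, 8)),
        )

    def forward(self, x):                    # x: (batch, 1, channels, time)
        x = torch.cat([b(x) for b in self.branches], dim=1)
        return self.spatial(x).squeeze(2)    # (batch, filters, time')


class TCNBlock(nn.Module):
    """Dilated causal 1-D convolution with a residual connection, the standard
    TCN building block for short- and long-term temporal dependencies."""
    def __init__(self, dim, kernel_size=4, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left-pad to keep causality
        self.conv = nn.Conv1d(dim, dim, kernel_size, dilation=dilation)
        self.act = nn.ELU()

    def forward(self, x):                    # x: (batch, dim, time)
        y = self.conv(nn.functional.pad(x, (self.pad, 0)))
        return self.act(x + y)               # residual fusion


class TCANetSketch(nn.Module):
    def __init__(self, n_channels=22, n_classes=4, n_heads=4, n_attn_layers=2):
        super().__init__()
        self.local = MultiScaleConv(n_channels)
        dim = 24                             # 8 filters x 3 temporal scales
        self.tcn = nn.Sequential(*[TCNBlock(dim, dilation=2 ** i) for i in range(3)])
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=n_attn_layers)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                    # x: (batch, 1, channels, time)
        x = self.local(x)                    # local multi-scale features
        x = self.tcn(x)                      # fused temporal dependencies
        x = self.attn(x.transpose(1, 2))     # global self-attention over time
        return self.head(x.mean(dim=1))      # pooled features -> class logits


# Example: a batch of 4-class BCI IV-2a-style trials (22 channels, 1000 samples).
logits = TCANetSketch()(torch.randn(8, 1, 22, 1000))
print(logits.shape)                          # torch.Size([8, 4])
```

Mean pooling over the attended time steps is one plausible way to reduce the sequence before classification; the actual model may instead flatten or select tokens, as determined by the released code.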
Conflict of interest statement
The authors declare no competing interests.