Sensors (Basel). 2025 Jul 27;25(15):4657. doi: 10.3390/s25154657.

BCINetV1: Integrating Temporal and Spectral Focus Through a Novel Convolutional Attention Architecture for MI EEG Decoding



Muhammad Zulkifal Aziz et al. Sensors (Basel). 2025.

Abstract

Motor imagery (MI) electroencephalograms (EEGs) capture cortical potentials elicited during imagined motor actions and are widely leveraged for brain-computer interface (BCI) development. However, effective decoding of MI EEG signals is often undermined by flawed signal-processing pipelines, deep learning methods that lack clinical interpretability, and highly inconsistent performance across datasets. We propose BCINetV1, a new framework for MI EEG decoding that addresses these challenges. BCINetV1 comprises three innovative components: a temporal convolution-based attention block (T-CAB) and a spectral convolution-based attention block (S-CAB), both driven by a new convolutional self-attention (ConvSAT) mechanism that identifies key non-stationary temporal and spectral patterns in the EEG signals, and a squeeze-and-excitation block (SEB) that combines the identified tempo-spectral features for accurate, stable, and contextually aware MI EEG classification. Evaluated on four diverse datasets comprising 69 participants, BCINetV1 consistently achieved the highest average accuracies of 98.6% (Dataset 1), 96.6% (Dataset 2), 96.9% (Dataset 3), and 98.4% (Dataset 4). These results demonstrate that BCINetV1 is computationally efficient, extracts clinically meaningful markers, effectively handles the non-stationarity of EEG data, and offers a clear advantage over existing methods, marking a significant step toward practical BCI applications.
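For readers who want a concrete picture of the described architecture, the following is a minimal PyTorch sketch of how T-CAB, S-CAB, ConvSAT, and SEB could be composed. It is based only on the abstract: the kernel shapes, channel counts, and the ConvSAT formulation (approximated here by a generic convolutional self-attention) are assumptions, not the authors' implementation.

```python
# Minimal sketch of a ConvSAT-style block composition (assumptions, not the authors' code).
import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    """Generic convolutional self-attention: queries/keys/values come from 1x1 convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)
        self.k = nn.Conv2d(channels, channels, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                       # x: (batch, ch, electrodes, time)
        b, c, e, t = x.shape
        q = self.q(x).flatten(2)                # (b, c, e*t)
        k = self.k(x).flatten(2)
        v = self.v(x).flatten(2)
        attn = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)  # (b, e*t, e*t)
        out = (v @ attn.transpose(1, 2)).view(b, c, e, t)
        return out + x                          # residual connection

class CABlock(nn.Module):
    """Convolution-based attention block (stands in for T-CAB / S-CAB)."""
    def __init__(self, in_ch, out_ch, kernel):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel, padding="same"),
                                  nn.BatchNorm2d(out_ch), nn.ELU())
        self.attn = ConvSelfAttention(out_ch)

    def forward(self, x):
        return self.attn(self.conv(x))

class SqueezeExcite(nn.Module):
    """Standard squeeze-and-excitation channel recalibration (stands in for SEB)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))          # global average pool -> channel weights
        return x * w[:, :, None, None]

class BCINetV1Sketch(nn.Module):
    def __init__(self, n_electrodes=3, n_classes=2):
        super().__init__()
        # Kernel shapes below are illustrative; the spectral processing of S-CAB is not reproduced.
        self.t_cab = CABlock(1, 8, (1, 25))
        self.s_cab = CABlock(1, 8, (n_electrodes, 1))
        self.seb = SqueezeExcite(16)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):                        # x: (batch, 1, electrodes, time)
        feats = torch.cat([self.t_cab(x), self.s_cab(x)], dim=1)
        return self.head(self.seb(feats))

# Example: 4 trials, 3 electrodes, 350 time stamps (e.g., 3.5 s at 100 Hz).
model = BCINetV1Sketch(n_electrodes=3, n_classes=2)
logits = model(torch.randn(4, 1, 3, 350))
```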

Keywords: biomedical signal processing; computer-aided diagnosis; electroencephalography (EEG); motor imagery.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 1
Detailed structure of the BCINetV1 model with its constituent T-CAB, S-CAB, ConvSAT, and SEB blocks. Throughout the figure, the notation k @ m×n specifies a convolutional layer with k kernels of size m×n.
Figure 2
Five-fold classification performance of the BCINetV1 model, depicting (a) accuracy, (b) recall, (c) F-score, and (d) kappa across five subjects (AA, AL, AV, AW, and AY). Bar heights denote the mean outcomes, while the whiskers represent the standard deviation.
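The per-fold metrics shown in this figure (accuracy, recall, F-score, and Cohen's kappa) can be computed with standard scikit-learn utilities; the sketch below uses a placeholder classifier and random data in place of BCINetV1 and the subjects' EEG features.

```python
# Generic five-fold evaluation sketch (placeholder classifier and data, not BCINetV1 itself).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score, f1_score, cohen_kappa_score

X = np.random.randn(200, 64)           # placeholder feature matrix (trials x features)
y = np.random.randint(0, 2, size=200)  # placeholder binary MI labels

scores = {"accuracy": [], "recall": [], "f_score": [], "kappa": []}
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores["accuracy"].append(accuracy_score(y[test_idx], pred))
    scores["recall"].append(recall_score(y[test_idx], pred, average="macro"))
    scores["f_score"].append(f1_score(y[test_idx], pred, average="macro"))
    scores["kappa"].append(cohen_kappa_score(y[test_idx], pred))

# Bar heights and whiskers in the figure correspond to the per-fold mean and standard deviation.
for name, vals in scores.items():
    print(f"{name}: {np.mean(vals):.3f} ± {np.std(vals):.3f}")
```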
Figure 3
Classification accuracy for five subjects from Dataset 1 using different channel combinations. The x-axis indicates the number of electrode channels used for training (3, 8, 18, and 118 channels), while the y-axis shows the corresponding test accuracy. The channel combinations in the figure legend indicate the channels used as test data.
Figure 4
Statistical quantification of BCINetV1 for Dataset 1.
Figure 5
(a) Pairwise statistical significance (p-values) of features using ANOVA test. (b) Histogram of ANOVA p-values.
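The feature-wise ANOVA p-values summarized in this figure can be obtained by testing each extracted feature across the MI classes; the sketch below uses scipy.stats.f_oneway on placeholder data, not the paper's actual features.

```python
# Generic one-way ANOVA over extracted features (placeholder data, not the paper's features).
import numpy as np
from scipy.stats import f_oneway

features = np.random.randn(200, 32)         # placeholder: trials x extracted features
labels = np.random.randint(0, 2, size=200)  # placeholder MI class labels

# p-value per feature: does its distribution differ between the imagined-movement classes?
p_values = np.array([
    f_oneway(*(features[labels == c, j] for c in np.unique(labels))).pvalue
    for j in range(features.shape[1])
])
print(f"{(p_values < 0.05).sum()} of {p_values.size} features are significant at p < 0.05")
```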
Figure 6
Topographic maps illustrating the model activations in different regions corresponding to the imagined activity. These maps represent the average response from the SEB module, calculated over the entire 3.5 s trial duration for each task. Dark red indicates elevated activity, while blue shades represent decreased activation.
Figure 7
BCINetV1 2D t-SNE embeddings for feature separability in Datasets 1–4.
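The 2D embeddings in this figure follow the standard t-SNE procedure; a minimal scikit-learn sketch is given below, with a random feature matrix standing in for the representations learned by BCINetV1.

```python
# Minimal t-SNE sketch for visualising feature separability (placeholder features).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

feats = np.random.randn(300, 128)           # placeholder: trials x learned feature dimension
labels = np.random.randint(0, 2, size=300)  # placeholder MI class labels

emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="coolwarm", s=10)
plt.xlabel("t-SNE dimension 1")
plt.ylabel("t-SNE dimension 2")
plt.title("2D t-SNE embedding of learned MI EEG features")
plt.show()
```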
Figure 8
Illustration of the working mechanism of T-CAB, S-CAB, and SEB modules in BCINetV1.
Figure 9
Variation in computational duration when altering (a) the number of time stamps and (b) the frequency of the input MI EEG signals.
Figure 10
Multi-dimensional comparison of model performance on Dataset 1. Classification accuracy (y-axis) is plotted against the number of trainable parameters (x-axis, log scale). The size of each bubble denotes the model's inference time: larger bubbles indicate longer inference times. * marks the model with the highest classification accuracy.
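A bubble chart of this kind can be reproduced with matplotlib by plotting accuracy against parameter count on a log x-axis and scaling the marker size by inference time; the numbers below are illustrative placeholders, not the paper's measurements.

```python
# Sketch of an accuracy-vs-parameters bubble chart (placeholder numbers, not the paper's results).
import matplotlib.pyplot as plt

models = ["Model A", "Model B", "BCINetV1"]
params = [1.2e6, 4.5e5, 8.0e4]      # trainable parameters (placeholders)
accuracy = [91.0, 94.5, 98.6]       # classification accuracy in % (placeholders)
inference_ms = [12.0, 8.0, 3.0]     # inference time per trial in ms (placeholders)

plt.scatter(params, accuracy, s=[20 * t for t in inference_ms], alpha=0.6)
for x, y, name in zip(params, accuracy, models):
    plt.annotate(name, (x, y))
plt.xscale("log")                   # parameter counts span orders of magnitude
plt.xlabel("Trainable parameters (log scale)")
plt.ylabel("Classification accuracy (%)")
plt.show()
```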
Figure 11
Impact of varying the model hyperparameters, (a) learning rate, (b) number of epochs, and (c) optimizer, on the classification outcomes.


