A multi-level teacher assistant-based knowledge distillation framework with dynamic feedback for motor imagery EEG decoding

Jinzhou Wu et al. Neural Netw. 2025 Sep 29;194:108180. doi: 10.1016/j.neunet.2025.108180. Online ahead of print.

Abstract

Deep learning has shown promise in motor imagery-based electroencephalogram (MI-EEG) decoding, a critical task in non-invasive brain-computer interfaces (BCIs). However, the computational complexity of deep learning models hinders their deployment in practical BCI applications, and knowledge distillation (KD) has emerged as a solution for model compression. Vanilla KD methods nonetheless struggle to extract and transfer the abundant multi-level knowledge in MI-EEG signals under high compression ratios. This study proposes a novel knowledge distillation framework, termed Motor Imagery Knowledge Distillation (MIKD), which compresses deep learning models for MI classification while maintaining high performance. The MIKD framework consists of two key modules: (1) a multi-level teacher assistant knowledge distillation (ML-TAKD) module that extracts and transfers local representations and global dependencies of MI-EEG signals from the complex teacher network to the much smaller student network, and (2) a dynamic feedback module that allows the teacher assistant to adjust its teaching strategy based on the student's learning progress. Extensive experiments on three public EEG datasets demonstrate that the MIKD framework achieves state-of-the-art performance. The proposed framework improves the baseline student model's accuracy by 6.61%, 1.91%, and 3.29% on the three datasets, while reducing the model size by nearly 90%.
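For readers unfamiliar with the distillation objective the abstract builds on, the sketch below shows vanilla response-based KD (Hinton et al., 2015): the student is trained on a temperature-softened KL term against the teacher's logits plus the usual hard-label cross-entropy. This is a minimal illustration of the baseline that MIKD extends, not the paper's ML-TAKD or dynamic feedback modules; the temperature, weighting, and toy 4-class MI setup are assumptions for the example.

import torch
import torch.nn.functional as F

TEMPERATURE = 4.0  # assumed; softens logits so the student sees inter-class structure
ALPHA = 0.5        # assumed balance between distillation and hard-label terms

def kd_loss(student_logits, teacher_logits, labels):
    """Vanilla KD loss: softened KL to the teacher + cross-entropy to labels."""
    soft_student = F.log_softmax(student_logits / TEMPERATURE, dim=1)
    soft_teacher = F.log_softmax(teacher_logits / TEMPERATURE, dim=1)
    # KL between softened distributions, scaled by T^2 as in Hinton et al.
    distill = F.kl_div(soft_student, soft_teacher, log_target=True,
                       reduction="batchmean") * TEMPERATURE ** 2
    hard = F.cross_entropy(student_logits, labels)
    return ALPHA * distill + (1 - ALPHA) * hard

if __name__ == "__main__":
    # Toy usage: a batch of 8 trials over 4 MI classes with random logits.
    torch.manual_seed(0)
    student_logits = torch.randn(8, 4, requires_grad=True)
    teacher_logits = torch.randn(8, 4)  # would come from the frozen teacher
    labels = torch.randint(0, 4, (8,))
    loss = kd_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(f"KD loss: {loss.item():.4f}")

A teacher-assistant scheme, as in the ML-TAKD module, chains this same objective through one or more intermediate-sized networks (teacher to assistant, then assistant to student) so the capacity gap at each distillation step stays small.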

Keywords: Brain-computer interface (BCI); Electroencephalogram (EEG); Knowledge distillation (KD); Motor imagery (MI).


Conflict of interest statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.