Front Neurorobot. 2021 Nov 23;15:692183. doi: 10.3389/fnbot.2021.692183. eCollection 2021.

Evaluating Convolutional Neural Networks as a Method of EEG-EMG Fusion


Jacob Tryon et al. Front Neurorobot. 2021.

Abstract

Wearable robotic exoskeletons have emerged as an exciting new treatment tool for disorders affecting mobility; however, the human-machine interface, used by the patient for device control, requires further improvement before robotic assistance and rehabilitation can be widely adopted. One method, made possible through advancements in machine learning technology, is the use of bioelectrical signals, such as electroencephalography (EEG) and electromyography (EMG), to classify the user's actions and intentions. While classification using these signals has been demonstrated for many relevant control tasks, such as motion intention detection and gesture recognition, challenges in decoding the bioelectrical signals have caused researchers to seek methods for improving the accuracy of these models. One such method is the use of EEG-EMG fusion, creating a classification model that decodes information from both EEG and EMG signals simultaneously to increase the amount of available information. So far, EEG-EMG fusion has been implemented using traditional machine learning methods that rely on manual feature extraction; however, new machine learning methods have emerged that can automatically extract relevant information from a dataset, which may prove beneficial during EEG-EMG fusion. In this study, Convolutional Neural Network (CNN) models were developed using combined EEG-EMG inputs to determine if they have potential as a method of EEG-EMG fusion that automatically extracts relevant information from both signals simultaneously. EEG and EMG signals were recorded during elbow flexion-extension and used to develop CNN models based on time-frequency (spectrogram) and time (filtered signal) domain image inputs. The results show a mean accuracy of 80.51 ± 8.07% for a three-class output (33.33% chance level), with an F-score of 80.74%, using time-frequency domain-based models. This work demonstrates the viability of CNNs as a new method of EEG-EMG fusion and evaluates different signal representations to determine the best implementation of a combined EEG-EMG CNN. It leverages modern machine learning methods to advance EEG-EMG fusion, which will ultimately lead to improvements in the usability of wearable robotic exoskeletons.

Keywords: EEG signals; EMG signals; convolutional neural networks; human-machine interfaces; sensor fusion.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
The protocol followed to process the EEG/EMG signals, generate the spectrogram and signal images, and train the CNN models using different EEG–EMG fusion methods. The top path (purple) shows the steps used to develop the CNN models based on spectrogram image inputs, while the bottom path (green) shows the steps used to develop the CNN models based on signal image inputs. For all EEG–EMG-fusion-based CNN model types (represented by the final step of each path), EEG-only and EMG-only versions were also trained to provide a baseline comparison for evaluating EEG–EMG fusion.
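To make the spectrogram path concrete, the sketch below turns one 250 ms window of a single filtered channel into a normalized time-frequency image. The 4,000 Hz sampling rate and 250 ms window come from the captions below; the scipy-based helper, FFT segment length, overlap, and dB scaling are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 4000                    # sampling rate (Hz), from Figure 3's caption
WINDOW_S = 0.25              # 250 ms classification window

def window_to_spectrogram(window, nperseg=64, noverlap=32):
    """Convert one single-channel window (1,000 samples) into a
    min-max-normalized time-frequency image. nperseg/noverlap and the
    dB scaling are illustrative choices, not the paper's settings."""
    _f, _t, sxx = spectrogram(window, fs=FS, nperseg=nperseg, noverlap=noverlap)
    sxx = 10 * np.log10(sxx + 1e-12)
    return (sxx - sxx.min()) / (sxx.max() - sxx.min() + 1e-12)

# Placeholder data standing in for one filtered EEG or EMG channel
img = window_to_spectrogram(np.random.randn(int(FS * WINDOW_S)))
```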
Figure 2
A sample normalized spectrogram image to demonstrate the three EEG–EMG fusion methods used, where (A,B) show single-channel spectrograms and (C) visualizes a multi-channel spectrogram. (A) Shows the grouped method, where signal channels of the same type are grouped together within the image. (B) Shows the mixed method, where EEG and EMG channels are alternated to mix signal types. (C) Provides a visualization of the stacked method, where a multi-channel spectrogram is generated by combining the different EEG/EMG spectrograms in a depth-wise manner.
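A minimal sketch of how the three layouts in this figure might be assembled from per-channel spectrograms (each a frequency × time array); the function name and the channel ordering are assumptions:

```python
import numpy as np
from itertools import zip_longest

def fuse_spectrograms(eeg_specs, emg_specs, method="grouped"):
    """Arrange per-channel spectrograms into one CNN input, following the
    three layouts of Figure 2. Channel ordering is an assumption."""
    if method == "grouped":   # (A) same-type channels stacked in one block
        return np.vstack(eeg_specs + emg_specs)
    if method == "mixed":     # (B) alternate EEG and EMG channels
        rows = [s for pair in zip_longest(eeg_specs, emg_specs)
                for s in pair if s is not None]
        return np.vstack(rows)
    if method == "stacked":   # (C) depth-wise: frequency x time x channel
        return np.stack(eeg_specs + emg_specs, axis=-1)
    raise ValueError(f"unknown fusion method: {method}")
```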
Figure 3
A graphical representation of a sample normalized signal image. The image is 5 rows tall, one row per signal channel, and its width is dictated by the number of samples in each 250 ms window (1,000 samples at the 4,000 Hz sampling rate).
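As a sketch of this image construction (the 5 × 1,000 shape follows from the caption; the per-row min-max normalization is an assumption):

```python
import numpy as np

FS = 4000                      # sampling rate (Hz)
N_SAMPLES = int(0.25 * FS)     # 250 ms window -> 1,000 samples

def make_signal_image(channels):
    """Stack the five filtered EEG/EMG channels of one window into a
    5 x 1,000 signal image, normalizing each row independently."""
    img = np.asarray(channels, dtype=np.float32)
    assert img.shape == (5, N_SAMPLES)
    lo = img.min(axis=1, keepdims=True)
    span = img.max(axis=1, keepdims=True) - lo
    return (img - lo) / (span + 1e-12)
```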
Figure 4
Example normalized spectrogram images and graphical representations of sample normalized signal images for each of the three weight levels, showing the qualitative variations in the images as task weight changes. As the task weight changes, the distribution of frequency magnitudes across time/channels shifts in the spectrogram images, and the shape of the time-domain signal varies in the signal images. Each column represents a different task weight level (described by the label above it), and the rows are a matched spectrogram and signal image taken from the same time window. The spectrograms shown use the grouped fusion method to arrange the channels. The images follow the same labeling convention as the sample images in Figures 2, 3; the labels are omitted here to avoid clutter.
Figure 5
The base model configuration used for all three spectrogram CNN model types. All spectrogram model types used three convolution layers, followed by two FC layers and an output FC layer to perform the final classification. Each convolution layer had three sub-layer steps (convolution, max pooling, and dropout) and each FC layer had two sub-layer steps (the FC step followed by dropout). Note that repeated layers only show the sub-layers of the first layer, to reduce redundancy and condense the diagram.
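A minimal PyTorch sketch of this base configuration; the caption specifies only the layer structure, so the filter counts, kernel sizes, FC widths, and dropout rates below are illustrative assumptions:

```python
import torch.nn as nn

def conv_layer(c_in, c_out, p_drop=0.25):
    # one "convolution layer" = convolution -> max pooling -> dropout
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2), nn.Dropout(p_drop),
    )

class SpectrogramCNN(nn.Module):
    """Three convolution layers, two FC layers (FC step + dropout each),
    and an output FC layer for the three-class prediction."""
    def __init__(self, in_channels=1, n_classes=3):  # e.g. in_channels=5 for stacked fusion
        super().__init__()
        self.features = nn.Sequential(
            conv_layer(in_channels, 16),
            conv_layer(16, 32),
            conv_layer(32, 64),
            nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.LazyLinear(128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: (batch, in_channels, freq, time)
        return self.classifier(self.features(x))
```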
Figure 6
The base model configurations used for the (A) split convolution and (B) 1D convolution models. Visual representations of the differences between the two convolution types are shown in the expanded view below each diagram, detailing the kernel sizes used to facilitate each type of convolution. Split convolution used one split convolution layer composed of temporal and spatial convolution sub-layers, followed by max pooling and dropout sub-layers. 1D convolution used three convolution layers, each with three sub-layer steps (convolution, max pooling, and dropout). All signal model types followed convolution with two FC layers (each containing two sub-layer steps: the FC step followed by dropout) and an output FC layer to perform the final classification. Note that repeated layers only show the sub-layers of the first layer, to reduce redundancy and condense the diagram.
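The split convolution in (A) can be sketched as a temporal convolution that slides only along the time axis, followed by a spatial convolution that spans all signal rows; the kernel widths and filter counts below are assumptions:

```python
import torch.nn as nn

class SplitConvSignalCNN(nn.Module):
    """One split convolution layer (temporal + spatial sub-layers), then
    max pooling, dropout, two FC layers, and an output FC layer."""
    def __init__(self, n_rows=5, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            # temporal sub-layer: kernel (1, k) slides along time only
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)), nn.ReLU(),
            # spatial sub-layer: kernel (n_rows, 1) spans all signal channels
            nn.Conv2d(16, 32, kernel_size=(n_rows, 1)), nn.ReLU(),
            nn.MaxPool2d((1, 4)), nn.Dropout(0.25),
            nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.LazyLinear(128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, n_rows, n_samples)
        return self.classifier(self.features(x))
```

The 1D convolution variant in (B) would instead treat each window as a (channels × samples) sequence and convolve over the time axis only, e.g. with nn.Conv1d.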
Figure 7
The mean accuracy of all (A) spectrogram-based and (B) signal-based CNN models, calculated across both speeds and all task weights. Error bars represent one standard deviation. Note that the y-axis begins at 30% (chance level for these models is 33.33%).
Figure 8
The mean accuracy for all CNN models, separated by the two speed levels (fast and slow). Models of the same type are grouped together, with the order of the groups from left to right as follows: single-channel spectrogram models, multi-channel spectrogram models, split convolution signal models, and 1D convolution signal models. Error bars represent ± one standard deviation.
Figure 9
Confusion matrices, using the combined classification results for all subjects, for the single-channel spectrogram-based CNN models. (A) Shows the matrix for the grouped fusion method while (B) shows the matrix for the mixed fusion method. (C,D) Show the matrices for the EEG and EMG only models, respectively. Each matrix contains a positive/negative precision score summary in the final two rows, and a positive/negative recall score summary in the final two columns.
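The summary rows and columns appended to these matrices can be reproduced roughly as below; this sketch uses standard per-class precision and recall, which may differ in detail from the paper's positive/negative score summaries:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def matrix_with_summaries(y_true, y_pred, n_classes=3):
    """Confusion matrix plus per-class precision (column-wise) and
    recall (row-wise), mirroring the summary rows/columns of Figures 9-12."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    precision = cm.diagonal() / cm.sum(axis=0).clip(min=1)  # per predicted class
    recall = cm.diagonal() / cm.sum(axis=1).clip(min=1)     # per true class
    return cm, precision, recall
```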
Figure 10
Confusion matrices, using the combined classification results for all subjects, for the multi-channel spectrogram-based CNN models. (A) Shows the matrix for the stacked fusion method, while (B,C) show the matrices for the EEG and EMG only models, respectively. Each matrix contains a positive/negative precision score summary in the final two rows, and a positive/negative recall score summary in the final two columns.
Figure 11
Confusion matrices, using the combined classification results for all subjects, for the split convolution signal-image-based CNN models. (A) Shows the matrix for the EEG–EMG fusion model, while (B,C) show the matrices for the EEG and EMG only models, respectively. Each matrix contains a positive/negative precision score summary in the final two rows, and a positive/negative recall score summary in the final two columns.
Figure 12
Confusion matrices, using the combined classification results for all subjects, for the 1D convolution signal-image-based CNN models. (A) Shows the matrix for the EEG–EMG fusion model, while (B,C) show the matrices for the EEG and EMG only models, respectively. Each matrix contains a positive/negative precision score summary in the final two rows, and a positive/negative recall score summary in the final two columns.
