Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network
- PMID: 32768036
- DOI: 10.1016/j.compbiomed.2020.103927
Abstract
In recent years, deep learning (DL) techniques, and in particular convolutional neural networks (CNNs), have shown great potential in electroencephalogram (EEG)-based emotion recognition. However, existing CNN-based EEG emotion recognition methods usually require a relatively complex stage of feature pre-extraction. More importantly, CNNs cannot adequately characterize the intrinsic relationships among the different channels of EEG signals, which are a crucial clue for emotion recognition. In this paper, we propose an effective multi-level features guided capsule network (MLF-CapsNet) for multi-channel EEG-based emotion recognition to overcome these issues. MLF-CapsNet is an end-to-end framework that simultaneously extracts features from the raw EEG signals and determines the emotional state. Compared with the original CapsNet, it incorporates multi-level feature maps learned by different layers when forming the primary capsules, enhancing the capability of feature representation. In addition, it uses a bottleneck layer to reduce the number of parameters and accelerate computation. Our method achieves average accuracies of 97.97%, 98.31% and 98.32% on the valence, arousal and dominance dimensions of the DEAP dataset, respectively, and 94.59%, 95.26% and 95.13% on those of the DREAMER dataset. These results show that our method achieves higher accuracy than state-of-the-art methods.
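To make the abstract's architectural idea concrete, the following is a minimal NumPy sketch, not the authors' implementation: feature maps from several (here randomly generated, hypothetical) convolutional levels are concatenated, compressed by a 1x1 bottleneck convolution, and regrouped into primary capsule vectors passed through the standard capsule "squash" nonlinearity. All layer sizes, channel counts, and the 8-dimensional capsule size are illustrative assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Capsule "squash" nonlinearity (Sabour et al., 2017): shrinks short
    # vectors toward zero and long vectors toward unit length.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def conv1x1(x, w):
    # A 1x1 "bottleneck" convolution is a per-position linear map over
    # channels. x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.tensordot(w, x, axes=([1], [0]))

rng = np.random.default_rng(0)
H, W = 8, 9  # illustrative spatial grid (e.g. time x EEG channel)

# Hypothetical feature maps from three convolutional levels of the network.
f1 = rng.standard_normal((16, H, W))
f2 = rng.standard_normal((32, H, W))
f3 = rng.standard_normal((64, H, W))

# Multi-level guidance: concatenate feature maps from all levels ...
multi = np.concatenate([f1, f2, f3], axis=0)          # (112, H, W)

# ... then compress channels with the bottleneck before forming capsules,
# reducing parameters in the subsequent capsule layers.
w_bottleneck = 0.1 * rng.standard_normal((32, multi.shape[0]))
compressed = conv1x1(multi, w_bottleneck)             # (32, H, W)

# Group the 32 channels into 4 primary capsules of dimension 8 at each
# spatial position, then squash each capsule vector.
caps = compressed.reshape(4, 8, H * W).transpose(0, 2, 1).reshape(-1, 8)
primary = squash(caps)                                # (4*H*W, 8)
```

The concatenation step is what distinguishes the multi-level variant from a plain CapsNet, whose primary capsules see only the last convolutional layer; the squashed vector lengths (all below 1) act as the capsules' activation probabilities.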
Keywords: Capsule network; Deep learning; Electroencephalogram (EEG); Emotion recognition.
Copyright © 2020 Elsevier Ltd. All rights reserved.
