Fusing Visual Attention CNN and Bag of Visual Words for Cross-Corpus Speech Emotion Recognition
- PMID: 32998382
- PMCID: PMC7583996
- DOI: 10.3390/s20195559
Abstract
Speech emotion recognition (SER) classifies emotions using low-level acoustic features or the spectrogram of an utterance. When SER methods are trained and tested on different datasets, their performance degrades. Cross-corpus SER research addresses emotion recognition across different corpora and languages, and recent work has focused on improving generalization. To improve cross-corpus SER performance, we pretrained on log-mel spectrograms of the source dataset using our visual attention convolutional neural network (VACNN), a 2D CNN base model with channel-wise and spatial-wise visual attention modules. To train on the target dataset, we extracted feature vectors using a bag of visual words (BOVW) to assist the fine-tuned model. Because visual words represent local features of an image, the BOVW helps the VACNN learn both global and local features of the log-mel spectrogram by constructing a frequency histogram of visual words. The proposed method achieves an overall accuracy of 83.33%, 86.92%, and 75.00% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EmoDB), and the Surrey Audio-Visual Expressed Emotion database (SAVEE), respectively. Experimental results on RAVDESS, EmoDB, and SAVEE demonstrate improvements of 7.73%, 15.12%, and 2.34% over existing state-of-the-art cross-corpus SER approaches.
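The BOVW step described above reduces to a simple procedure: assign each local descriptor (e.g. a patch of the log-mel spectrogram) to its nearest visual word in a learned codebook, then count word occurrences. A minimal NumPy sketch of that histogram construction follows; the function and variable names are illustrative and not taken from the paper, and the codebook is assumed to come from a prior clustering step such as k-means.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return an L1-normalized frequency histogram over the codebook.

    descriptors: (N, D) local features, e.g. patches of a log-mel spectrogram
    codebook:    (K, D) visual-word centroids, e.g. from k-means clustering
    """
    # Squared Euclidean distance from every descriptor to every visual word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)  # index of nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()   # frequency histogram (sums to 1)

# Toy example: 2 visual words in 2-D, 4 descriptors (hypothetical values)
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
descs = np.array([[0.1, 0.2], [9.8, 10.1], [0.0, 0.1], [10.2, 9.9]])
print(bovw_histogram(descs, codebook))  # → [0.5 0.5]
```

In the paper's pipeline this histogram would serve as the global feature vector that complements the VACNN's learned representation of the spectrogram.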
Keywords: bag of visual words; convolutional neural network; cross-corpus; log-mel spectrograms; speech emotion recognition; visual attention.
Conflict of interest statement
The authors declare no conflict of interest.