End-to-End Training for Compound Expression Recognition
- PMID: 32825666
- PMCID: PMC7506941
- DOI: 10.3390/s20174727
Abstract
For a long time, expression has been a point of human pride and an essential difference between us and machines. As computing has developed, we have grown more eager to build communication between humans and machines, especially communication that carries emotion. The emotional growth of computers resembles our own: it begins with natural, intimate, and vivid interaction through observing and discerning emotions. Since the basic emotions (angry, disgusted, fearful, happy, neutral, sad, and surprised) were first proposed, much research has been built on them, but little has addressed compound emotions. In real life, however, people's emotions are complex; single expressions cannot fully and accurately convey inner emotional changes, so exploring compound expression recognition is essential. In this paper, we propose a scheme that combines spatial- and frequency-domain transforms to implement end-to-end joint training, based on ensembling models for appearance and geometric representation learning, for the recognition of compound expressions in the wild. We focus on extracting appearance and geometric information with deep learning models. For appearance features, we adopt transfer learning, introducing a ResNet50 model pretrained on VGGFace2 for face recognition and fine-tuning it. Here we try and compare two approaches: in the first, we fine-tune on two static expression databases, FER2013 and RAF Basic, for basic emotion recognition; in the second, we fine-tune on three-channel inputs composed of images generated by the DWT2 and WAVEDEC2 wavelet transforms with the rbio3.1 and sym1 wavelet bases, respectively. For geometric features, we first introduce a dense SIFT operator to extract facial key points and their histogram descriptions. We then introduce a deep SAE with a softmax function, a stacked LSTM, and a Sequence-to-Sequence model with stacked LSTMs, defining their structures ourselves; we feed the salient key points and their descriptions into the three models, train each, and compare their performance. Once the appearance and geometric models are trained, we combine them with the category labels for further end-to-end joint training, on the grounds that ensembling models that describe different information can further improve recognition results. Finally, we validate the proposed framework on the RAF Compound database and achieve a recognition rate of 66.97%. Experiments show that integrating models that express different information and training them end to end quickly and effectively improves recognition performance.
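The abstract does not specify how the three frequency-domain channels are composed, so the sketch below is one plausible reading under stated assumptions: the raw grayscale face plus the approximation bands from a single-level DWT2 (rbio3.1) and a multi-level WAVEDEC2 (sym1), normalized and resized to a common shape. It uses PyWavelets and OpenCV.

```python
# Sketch of the frequency-domain input construction (channel layout assumed,
# not taken from the paper): grayscale face + DWT2 and WAVEDEC2 approximations.
import cv2
import numpy as np
import pywt

def three_channel_input(gray: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Stack the raw face with two wavelet approximations as one 3-channel image."""
    # Single-level 2-D DWT with the rbio3.1 basis; keep the approximation band.
    cA_rbio, _ = pywt.dwt2(gray.astype(np.float32), 'rbio3.1')
    # Multi-level 2-D decomposition with the sym1 basis; coeffs[0] is the
    # coarsest approximation (decomposition level is an assumption).
    coeffs = pywt.wavedec2(gray.astype(np.float32), 'sym1', level=2)
    cA_sym = coeffs[0]

    def norm_resize(band: np.ndarray) -> np.ndarray:
        band = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX)
        return cv2.resize(band, size).astype(np.uint8)

    channels = [cv2.resize(gray, size), norm_resize(cA_rbio), norm_resize(cA_sym)]
    return np.stack(channels, axis=-1)  # H x W x 3, ready for a CNN backbone
```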
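For the appearance branch, a minimal PyTorch fine-tuning sketch follows. Note that torchvision does not ship VGGFace2 weights, so loading them from a separate checkpoint (the path below is hypothetical) is assumed; only the classifier head is replaced to match the 7 basic emotions used in the FER2013/RAF Basic fine-tuning stage.

```python
# Minimal transfer-learning sketch: ResNet50 backbone, new 7-way head,
# standard SGD fine-tuning. VGGFace2 checkpoint path is hypothetical.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50()
# state = torch.load('resnet50_vggface2.pth')   # hypothetical VGGFace2 weights
# model.load_state_dict(state, strict=False)
model.fc = nn.Linear(model.fc.in_features, 7)   # 7 basic emotion classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```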
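For the geometric branch, dense SIFT computes 128-D histogram descriptors on a fixed grid rather than at detected interest points. A sketch with OpenCV is below; the grid step and patch size are assumptions, not values from the paper.

```python
# Dense-SIFT sketch: place keypoints on a regular grid and compute SIFT
# histogram descriptors at each one. Step/size parameters are illustrative.
import cv2
import numpy as np

def dense_sift(gray: np.ndarray, step: int = 8, size: int = 8) -> np.ndarray:
    sift = cv2.SIFT_create()
    h, w = gray.shape
    # One keypoint per grid cell; each yields a 128-D histogram descriptor.
    kps = [cv2.KeyPoint(float(x), float(y), float(size))
           for y in range(step, h - step, step)
           for x in range(step, w - step, step)]
    _, desc = sift.compute(gray, kps)
    return desc  # (num_keypoints, 128), fed to the geometric-feature models
```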
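The abstract names three self-defined geometric learners (deep SAE with softmax, stacked LSTM, Seq2Seq with stacked LSTMs) but not their structures, so the following is a hedged sketch of just the stacked-LSTM variant: the dense-SIFT descriptors are treated as a sequence and the last hidden state feeds a softmax classifier. Layer widths and the 11-class output (the RAF compound categories) are assumptions.

```python
# One of the three geometric-feature learners: a two-layer stacked LSTM over
# the sequence of dense-SIFT descriptors, ending in a linear + softmax head.
import torch
import torch.nn as nn

class StackedLSTMClassifier(nn.Module):
    def __init__(self, in_dim=128, hidden=256, layers=2, num_classes=11):
        super().__init__()
        # num_layers=2 stacks two LSTM layers; sizes are illustrative.
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):             # x: (batch, seq_len, 128) SIFT descriptors
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # last step -> logits (softmax via the loss)
```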
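Finally, a sketch of the end-to-end joint training of the ensemble. The fusion mechanism is not described in the abstract; concatenating the two branches' feature vectors (heads removed, so each branch outputs features) and classifying with one linear layer is an assumption, but it shows how gradients from the compound labels reach both models at once.

```python
# Joint end-to-end training sketch: appearance and geometric branches are
# assumed to output feature vectors; fusion by concatenation is an assumption.
import torch
import torch.nn as nn

class JointModel(nn.Module):
    def __init__(self, appearance, geometric, app_dim, geo_dim, num_classes=11):
        super().__init__()
        self.appearance, self.geometric = appearance, geometric
        self.fusion = nn.Linear(app_dim + geo_dim, num_classes)

    def forward(self, image, sift_seq):
        fused = torch.cat([self.appearance(image), self.geometric(sift_seq)], dim=1)
        return self.fusion(fused)  # both branches receive gradients end to end
```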
Keywords: Sequence-to-Sequence; appearance feature; compound expression; deep SAE; end-to-end; frequency domain transform; geometric feature; joint training; model ensembling; stacked LSTM.
Conflict of interest statement
The authors declare no conflict of interest.