Facial Expression Recognition Methods in the Wild Based on Fusion Feature of Attention Mechanism and LBP
- PMID: 37177408
- PMCID: PMC10180539
- DOI: 10.3390/s23094204
Abstract
Facial expression recognition (FER) plays a vital role in human-computer interaction and other fields, but occlusion, illumination, and pose changes in in-the-wild images, together with category imbalance across datasets, lead to large variations in recognition rates and low accuracy for some expression categories. This study introduces RCL-Net, a method for recognizing facial expressions in the wild based on an attention mechanism and LBP feature fusion. The network consists of two main branches: a ResNet-CBAM residual attention branch and a local binary pattern (LBP) feature extraction branch. First, by merging a residual network with a hybrid attention mechanism, a residual attention network is built to emphasize local detail features of facial expressions; salient expression characteristics are extracted along both the channel and spatial dimensions to form the residual attention classification model. Second, we present a locally enhanced residual attention model: LBP features are introduced at the feature extraction stage to capture texture information in expression images, highlighting facial feature information and improving the recognition accuracy of the model. Lastly, experimental validation on the FER2013, FERPlus, CK+, and RAF-DB datasets demonstrates that the proposed method has better generalization capability and robustness than recent methods in both laboratory-controlled and in-the-wild environments.
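The abstract describes a CBAM-style hybrid attention module applied within a residual network, extracting salient features along both the channel and spatial dimensions. The paper's exact configuration is not reproduced here; the following is a minimal PyTorch sketch of such a module, where the class names, reduction ratio, and kernel size are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: squeeze spatial dims with average and max pooling,
    pass both through a shared MLP, and gate the channels with a sigmoid."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction=16 is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # (B, C) from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # (B, C) from max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    """Spatial attention: pool across channels, convolve the 2-channel map,
    and gate each spatial location with a sigmoid."""
    def __init__(self, kernel_size: int = 7):  # kernel_size=7 is an assumption
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)        # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)       # (B, 1, H, W)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """CBAM applies channel attention first, then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.sa(self.ca(x))

# Example: refine a residual stage's feature map.
feats = torch.randn(8, 256, 14, 14)
print(CBAM(256)(feats).shape)  # torch.Size([8, 256, 14, 14])
```

In a ResNet-CBAM branch of this kind, the module would typically sit after each residual block so the attention maps refine that block's output before it reaches the next stage.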
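For the LBP branch, the texture descriptor the abstract describes could be computed as sketched below, using scikit-image's uniform LBP. The neighbor count P, radius R, and histogram binning are chosen for illustration and are not taken from the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Compute a uniform-LBP texture descriptor for a grayscale face crop.
    Returns a normalized histogram over the P + 2 uniform-pattern bins."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2  # uniform method yields codes in {0, ..., P + 1}
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist.astype(np.float32)

# Example on a random image; real use would pass an aligned face crop.
face = np.random.randint(0, 256, size=(48, 48), dtype=np.uint8)
print(lbp_histogram(face).shape)  # (10,)
```

The resulting histogram (or a spatial grid of such histograms) would then be fused with the deep attention features before classification, per the two-branch design described above.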
Keywords: LBP features; attention mechanism; deep learning; facial expression recognition.
Conflict of interest statement
The authors declare no conflict of interest.
References
- Li S., Deng W. Deep Facial Expression Recognition: A Survey. IEEE Trans. Affective Comput. 2022;13:1195–1215. doi: 10.1109/TAFFC.2020.2981446.
- Lucey P., Cohn J.F., Kanade T., Saragih J., Ambadar Z., Matthews I. The Extended Cohn-Kanade Dataset (CK+): A Complete Dataset for Action Unit and Emotion-Specified Expression. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; San Francisco, CA, USA, 13–18 June 2010; pp. 94–101.
- Lyons M., Akamatsu S., Kamachi M., Gyoba J. Coding Facial Expressions with Gabor Wavelets. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition; Nara, Japan, 14–16 April 1998; pp. 200–205.
- Valstar M.F., Pantic M. Induced Disgust, Happiness and Surprise: An Addition to the MMI Facial Expression Database. Proceedings of the 3rd International Workshop on Emotion; Paris, France, 29 October 2010.
- Zhao G., Huang X., Taini M., Li S.Z., Pietikäinen M. Facial Expression Recognition from Near-Infrared Videos. Image Vis. Comput. 2011;29:607–619. doi: 10.1016/j.imavis.2011.07.002.
Grants and funding
- National Key R&D Program (2018YFB1306601)
- Chinese Academy of Sciences "Light of the West" Talent Training Introduction Program (2019-90)
- Cooperation Projects between Chongqing Universities and Institutions Affiliated with the Chinese Academy of Sciences (HZ2021011)
- Chongqing Technology Innovation and Application Development Special Project (cstc2021jscx-cylhX0009)