Sensors (Basel). 2021 Mar 19;21(6):2166. doi: 10.3390/s21062166.

DRER: Deep Learning-Based Driver's Real Emotion Recognizer

Geesung Oh et al.

Abstract

In intelligent vehicles, monitoring the driver's condition is essential, and recognizing the driver's emotional state is one of the most challenging and important tasks. Most previous studies have focused on facial expression recognition to monitor the driver's emotional state. However, while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose the deep learning-based driver's real emotion recognizer (DRER), a deep learning algorithm that recognizes drivers' real emotions, which cannot be completely identified from their facial expressions alone. The proposed algorithm comprises two models: (i) a facial expression recognition model, which follows a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver's real emotional state. We categorized the driver's emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared to using only facial expressions and a 146% increase compared to using only electrodermal activity. In conclusion, the proposed method achieves 86.8% recognition accuracy in recognizing the driver's induced emotions while driving.
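As a rough illustration of the two-stage, late-fusion idea described in the abstract (not the authors' exact architecture), a fusion network might combine a facial-expression feature vector from the FER model with EDA-derived features before classifying the emotion. All layer sizes, feature dimensions, and the PyTorch implementation below are illustrative assumptions.

```python
# Hypothetical sketch of the two-stage approach:
# (i) a CNN-based facial expression model produces a feature/state vector,
# (ii) a fusion network combines it with EDA features to classify emotion.
# All dimensions and layer choices are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class FusionEmotionClassifier(nn.Module):
    def __init__(self, face_dim=2, eda_dim=8, num_emotions=8):
        super().__init__()
        # face_dim=2 assumes the FER model outputs valence/arousal estimates;
        # eda_dim assumes a small vector of EDA statistics (e.g., filtered level, slope).
        self.fusion = nn.Sequential(
            nn.Linear(face_dim + eda_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_emotions),
        )

    def forward(self, face_feat, eda_feat):
        # Late fusion: concatenate the two modalities, then classify.
        x = torch.cat([face_feat, eda_feat], dim=-1)
        return self.fusion(x)

# Usage with dummy tensors (batch of 4 samples):
model = FusionEmotionClassifier()
logits = model(torch.randn(4, 2), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 8])
```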

Keywords: deep learning; driver’s emotional state; emotion recognition; human–machine interface; real emotion; sensor fusion.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Overview of the proposed work with two major steps: FER and SFER.

Figure 2
(a) VGGNet with vanilla CNN (blue) and max-pooling (green). (b) ResNet with vanilla CNN (blue), max-pooling (green), and shortcut connection (orange). (c) ResNeXt with vanilla CNN (blue), max-pooling (green), and shortcut connection (orange). (d) SE-ResNet with vanilla CNN (blue), max-pooling (green), shortcut connection (orange), and SE block (yellow).

Figure 3
Illustration of the SE block; different colors represent different channels.

Figure 4
The correlations between the valence and arousal of the eight defined emotions.

Figure 5
Sample images in the AffectNet dataset, including faces of people of different races, ages, and genders.

Figure 6
(a) The participants' emotions are induced through video viewing. (b) The participants describe their own experiences related to the induced emotions.

Figure 7
(a) Three-channel projectors and screens and the cabin of the full-scale driving simulator. (b) The camera is installed between the windshield and the headliner (red), and the biomedical instrument for EDA is set on the driver's wrist (green).

Figure 8
Part of the raw and average-filtered EDA electrical conductance data measured during one of the simulated driving sessions.

Figure 9
L2 loss on the validation set during training of the proposed FER model.

Figure 10
Cross-entropy loss on the validation set during training of the proposed SFER model.

Figure 11
ROC curve for each defined emotion.
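For readers unfamiliar with the squeeze-and-excitation (SE) block illustrated in Figure 3, the following is a minimal SE module in PyTorch. The reduction ratio and layer choices are common defaults from the SE-Net literature, not necessarily the settings used in this paper.

```python
# Minimal squeeze-and-excitation (SE) block, as illustrated in Figure 3.
# The reduction ratio of 16 is a common default, assumed here for illustration.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # global average pool per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                        # per-channel weights in [0, 1]
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)        # squeeze: (B, C, H, W) -> (B, C)
        w = self.excite(w).view(b, c, 1, 1)   # excitation: channel-wise weights
        return x * w                          # recalibrate feature maps

# Example: recalibrate a 64-channel feature map.
se = SEBlock(64)
out = se(torch.randn(2, 64, 28, 28))
print(out.shape)  # torch.Size([2, 64, 28, 28])
```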

