A multimodal emotion detection system during human-robot interaction

Fernando Alonso-Martín et al. Sensors (Basel). 2013 Nov 14;13(11):15549-81. doi: 10.3390/s131115549.

Abstract

In this paper, a multimodal user-emotion detection system for social robots is presented. The system is intended to be used during human-robot interaction and is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modalities are used to detect emotions: voice and facial expression analysis. In order to analyze the user's voice, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), written in the ChucK language. For emotion detection in facial expressions, a second component, Gender and Emotion Facial Analysis (GEFA), has also been developed. GEFA integrates two third-party solutions: the Sophisticated High-speed Object Recognition Engine (SHORE) and the Computer Expression Recognition Toolbox (CERT). Once GEVA and GEFA produce their results, a decision rule is applied to combine the information given by both. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act passes, among other things, the detected emotion of the user to the RDS, so that the robot can adapt its dialog strategy and achieve greater user satisfaction during the human-robot dialog. Each of the new components, GEVA and GEFA, can also be used individually, and both are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results of applying this decision rule in these experiments show a high success rate in automatic user-emotion recognition, improving on the results given by either information channel (audio or visual) separately.
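The abstract does not specify the decision rule itself. As a minimal sketch of decision-level fusion (the scheme of Figure 2b), assuming each channel reports an emotion label together with a confidence score, the combination could look like the following Python sketch; all names, weights and the tie-breaking policy are hypothetical illustrations, not the paper's actual rule:

from dataclasses import dataclass

@dataclass
class ChannelResult:
    emotion: str       # e.g. "happiness", "sadness", "neutral"
    confidence: float  # assumed to lie in [0, 1]

def fuse(voice: ChannelResult, face: ChannelResult,
         voice_weight: float = 0.5, face_weight: float = 0.5) -> str:
    """Combine the per-channel detections into a single emotion label.

    Each channel contributes a weighted confidence to its detected label;
    when the channels agree, their contributions accumulate, so agreement
    is favored. The weights are free parameters that experiments like
    those in the paper could be used to tune.
    """
    scores: dict[str, float] = {}
    scores[voice.emotion] = scores.get(voice.emotion, 0.0) + voice_weight * voice.confidence
    scores[face.emotion] = scores.get(face.emotion, 0.0) + face_weight * face.confidence
    # Return the label with the highest accumulated score.
    return max(scores, key=scores.get)

# Channels agree: "happiness" wins outright.
print(fuse(ChannelResult("happiness", 0.8), ChannelResult("happiness", 0.6)))
# Channels disagree: the more confident channel wins.
print(fuse(ChannelResult("anger", 0.4), ChannelResult("neutral", 0.9)))

Under this sketch, the fused label would be attached to each communicative act and handed to the RDS, which matches the role the abstract assigns to the decision rule.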


Figures

Figure 1. The multimodal interaction system Robotics Dialog System (RDS).
Figure 2. Two kinds of fusion levels: decision and feature extraction level. (a) A unique classifier (fusion at the feature extraction level); (b) one classifier for each channel (fusion at the decision level).
Figure 3. Multimodal emotion detection system.
Figure 4. The three audio domains in which voice feature extraction is performed.
Figure 5. Rotation parameters: roll, pitch and yaw.
Figure 6. Scheme of the process for determining the main user emotion in each communicative act (CA).
Figure 7. The robot used in the experiments.
Figure 8. Image taken during the experiments carried out at the IST in Lisbon. (a) Gender and Emotion Voice Analysis (GEVA); (b) Computer Expression Recognition Toolbox (CERT); (c) Sophisticated High-speed Object Recognition Engine (SHORE).
Figure 9. The robot, Maggie.

