Sensors (Basel). 2024 May 22;24(11):3311. doi: 10.3390/s24113311.

Implementation of Engagement Detection for Human-Robot Interaction in Complex Environments

Sin-Ru Lu et al.
Abstract

This study develops a comprehensive robotic system, termed the robot cognitive system, for complex environments, integrating three models: the engagement model, the intention model, and the human-robot interaction (HRI) model. The system aims to enhance the naturalness and comfort of HRI by enabling robots to detect human behaviors, intentions, and emotions accurately. A novel dual-arm-hand mobile robot, Mobi, was designed to demonstrate the system's efficacy. The engagement model utilizes eye gaze, head pose, and action recognition to determine the suitable moment for interaction initiation, addressing potential eye contact anxiety. The intention model employs sentiment analysis and emotion classification to infer the interactor's intentions. The HRI model, integrated with Google Dialogflow, facilitates appropriate robot responses based on user feedback. The system's performance was validated in a retail environment scenario, demonstrating its potential to improve the user experience in HRIs.
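
The system described above chains three models: the engagement model decides when to initiate interaction, the intention model infers what the interactor wants, and the HRI model selects a response. A minimal Python sketch of how such a pipeline might be wired together follows; every class, rule, and canned reply in it is a hypothetical placeholder rather than the authors' implementation.

```python
# Minimal sketch of the three-stage pipeline described in the abstract.
# All names, rules, and replies here are illustrative placeholders, not
# the authors' implementation.
from dataclasses import dataclass

@dataclass
class Perception:
    eye_contact: bool      # from the gaze tracker
    facing_robot: bool     # from head-pose estimation
    action: str            # from the action recognizer, e.g., "approaching"
    utterance: str         # transcribed speech, possibly empty

def estimate_engagement(p: Perception) -> str:
    # Stand-in for the engagement HMM: fuse gaze, head pose, and action.
    if p.eye_contact and p.facing_robot and p.action == "approaching":
        return "engaged"
    return "not_engaged"

def classify_intention(utterance: str) -> str:
    # Stand-in for sentiment/emotion-based intention classification.
    return "seeking_help" if "where" in utterance.lower() else "browsing"

def select_response(intention: str) -> str:
    # Stand-in for the HRI policy (the paper uses Google Dialogflow).
    return {"seeking_help": "May I help you find something?",
            "browsing": "Let me know if you need anything."}[intention]

def cognitive_step(p: Perception) -> str:
    """One pass through engagement -> intention -> HRI response."""
    if estimate_engagement(p) != "engaged":
        return ""  # not a suitable moment to initiate interaction
    return select_response(classify_intention(p.utterance))

print(cognitive_step(Perception(True, True, "approaching", "Where is the milk?")))
```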

Keywords: action recognition; cognitive system; engagement; human behaviors; human–robot interaction.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
The four layers in HRI. The design of robots should satisfy all design conditions, from safety to sociability.
Figure 2
Robot cognitive system. The system is divided into three parts: the engagement model, the intention model, and the HRI model. In the engagement model, head poses, eye angles, and actions are analyzed by the engagement HMM to estimate the state of engagement. The intention model then uses the HMM's output to identify the target's intention state. To provide a better interaction experience, the HRI model is included as the policy system for HRI.
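
The Figure 2 caption describes an HMM that fuses head-pose, eye-angle, and action cues into an engagement state. Below is a minimal Python illustration of that idea using the standard forward algorithm; the two states, the binary observation coding, and all probabilities are invented for illustration and are not the paper's parameters.

```python
# Illustrative discrete HMM for engagement state estimation; the states,
# observation encoding, and all probabilities are made up (the paper's
# actual engagement HMM parameters are not given here).
import numpy as np

states = ["not_engaged", "engaged"]
# Observation symbol: 1 if the person faces the robot AND makes eye
# contact, else 0 (a crude fusion of head pose, eye angles, and action).
A = np.array([[0.8, 0.2],   # state transition probabilities
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],   # P(observation | state)
              [0.2, 0.8]])
pi = np.array([0.9, 0.1])   # initial state distribution

def forward_filter(obs):
    """Return P(state_t | obs_1..t) at each step (forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    beliefs = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        beliefs.append(alpha)
    return np.array(beliefs)

# Example: a person gradually turns toward the robot and holds eye contact.
for t, b in enumerate(forward_filter([0, 0, 1, 1, 1])):
    print(f"t={t}: P(engaged) = {b[1]:.2f}")
```
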
Figure 3
Structure of the gaze tracker. First, the 2D eye region is processed by the eye gaze detector. To restore the eye region image, the SRWGAN is included in the model. Finally, a CNN-based classifier is trained to obtain the eye angles.
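
The gaze tracker is a three-stage pipeline: extract the 2D eye region, restore it with the SRWGAN, and classify the eye angles with a CNN. The PyTorch skeleton below mirrors that structure with toy stand-ins for each stage; the layer shapes, upscaling factor, and nine angle classes are assumptions, not the paper's architecture.

```python
# Skeleton of the three-stage gaze tracker in Figure 3, in PyTorch.
# The SRWGAN generator and the CNN angle classifier are reduced to toy
# modules; all sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

class ToySRGenerator(nn.Module):
    """Placeholder for the SRWGAN generator restoring low-res eye crops."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear",
                        align_corners=False),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class ToyGazeClassifier(nn.Module):
    """Placeholder CNN mapping a restored eye image to gaze-angle classes."""
    def __init__(self, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))
        self.head = nn.Linear(8 * 4 * 4, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Pipeline: eye crop (assumed given) -> super-resolve -> classify angle.
eye_crop = torch.rand(1, 3, 18, 30)        # low-resolution eye region
restored = ToySRGenerator()(eye_crop)      # SRWGAN stage (placeholder)
logits = ToyGazeClassifier()(restored)     # CNN gaze-angle stage
print("predicted gaze-angle class:", logits.argmax(dim=1).item())
```
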
Figure 4
Model of the emotion classifier.
Figure 5
HRI model with Google Dialogflow.
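
The HRI model routes user utterances through Google Dialogflow. The snippet below shows a standard detect-intent call with the google-cloud-dialogflow Python client, roughly how a policy layer might obtain a reply; the project and session IDs are placeholders, and this is a generic quickstart-style call rather than the authors' code.

```python
# Querying a Dialogflow agent with the standard google-cloud-dialogflow
# client. Project and session IDs are placeholders; this is a generic
# quickstart-style call, not the authors' code.
from google.cloud import dialogflow

def detect_intent(project_id: str, session_id: str, text: str,
                  language_code: str = "en-US") -> str:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input})
    # The fulfillment text becomes the robot's spoken reply.
    return response.query_result.fulfillment_text

# Example (requires Google Cloud credentials and an existing agent):
# print(detect_intent("my-gcp-project", "mobi-session-1", "Where is the milk?"))
```
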
Figure 6
Engagement equation.
Figure 7
The hardware of Mobi.
Figure 8
Scenario of our experiment.
Figure 9
People interacting with Mobi.
Figure 10
The engagement state of the subject.
Figure 11
Engagement comfort index over time. At 2 s, the index reaches −0.2, and the start of HRI is detected. Up to 10 s, the subject keeps staring at Mobi, so the index continues to decline to −1.0.
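
The paper's engagement comfort index equation is not reproduced here, but the Figure 11 caption implies a simple trigger rule: HRI begins once the index falls to −0.2. A small sketch of that threshold check follows, with made-up index values matching the caption's timeline.

```python
# Threshold logic implied by Figure 11: interaction starts once the
# engagement comfort index drops to -0.2. The index values below are
# invented; the paper's actual index equation is not reproduced here.
START_THRESHOLD = -0.2

def detect_hri_start(index_series, dt=1.0):
    """Return the first time (s) the comfort index crosses the threshold."""
    for step, value in enumerate(index_series):
        if value <= START_THRESHOLD:
            return step * dt
    return None

# A subject who keeps staring: the index declines toward -1.0 (cf. Figure 11).
series = [0.0, -0.1, -0.2, -0.4, -0.6, -0.8, -1.0]
print("HRI start detected at t =", detect_hri_start(series), "s")
```
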
Figure 12
Subjects interacting with Mobi.
Figure 13
Mobi identified the interactors using the engagement comfort index.
Figure 14
Mobi helped the customer find the goods.
