Human-Aware Collaborative Robots in the Wild: Coping with Uncertainty in Activity Recognition

Beril Yalçinkaya et al. Sensors (Basel). 2023 Mar 23;23(7):3388. doi: 10.3390/s23073388.
Abstract

This study presents a novel approach to cope with the uncertainty of human behaviour during Human-Robot Collaboration (HRC) in dynamic and unstructured environments such as agriculture, forestry, and construction. These challenging tasks often demand excessive time and labour and are hazardous for humans, leaving ample room for improvement through collaboration with robots. However, integrating humans in the loop raises open challenges due to the uncertainty that comes with the ambiguous nature of human behaviour. Such uncertainty makes it difficult to represent high-level human behaviour from low-level sensory input data. The proposed Fuzzy State-Long Short-Term Memory (FS-LSTM) approach addresses this challenge by fuzzifying ambiguous sensory data and combining activity recognition with sequence modelling using state machines and the LSTM deep learning method. The evaluation compares a traditional LSTM with raw sensory inputs, a Fuzzy-LSTM with fuzzified inputs, and the proposed FS-LSTM. The results show that fuzzified inputs significantly improve accuracy compared to the traditional LSTM and that, while the fuzzy state machine approach yields results similar to the Fuzzy-LSTM, it offers the added benefits of ensuring feasible transitions between activities and improved computational efficiency.
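The abstract gives no implementation details, but the pipeline it describes (fuzzified sensory inputs, an LSTM activity classifier, and a state machine constraining transitions between recognised activities) can be sketched roughly as below. This is a minimal illustration in PyTorch; the activity labels, membership functions, feasible-transition table, and layer sizes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a fuzzified-input LSTM with a state-machine filter,
# loosely following the FS-LSTM idea in the abstract. All names, labels,
# membership functions, and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

ACTIVITIES = ["idle", "walking", "working", "lost"]   # hypothetical label set

def fuzzify(x: torch.Tensor) -> torch.Tensor:
    """Map raw sensor values in [0, 1] to 'low'/'medium'/'high' membership
    degrees using simple triangular membership functions (assumed shapes)."""
    low = torch.clamp(1.0 - x / 0.5, 0.0, 1.0)
    medium = torch.clamp(1.0 - torch.abs(x - 0.5) / 0.5, 0.0, 1.0)
    high = torch.clamp((x - 0.5) / 0.5, 0.0, 1.0)
    return torch.stack([low, medium, high], dim=-1)

class FuzzyLSTM(nn.Module):
    """LSTM classifier over sequences of fuzzified sensor features."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(3 * n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(ACTIVITIES))

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, n_features) raw readings scaled to [0, 1]
        fuzzy = fuzzify(seq).flatten(start_dim=2)   # (batch, time, 3 * n_features)
        out, _ = self.lstm(fuzzy)
        return self.head(out[:, -1, :])             # logits for the last time step

# Feasible transitions between consecutive activities (assumed, not the paper's).
FEASIBLE = {
    "idle":    {"idle", "walking"},
    "walking": {"idle", "walking", "working", "lost"},
    "working": {"working", "walking"},
    "lost":    {"lost", "walking"},
}

def fsm_filter(prev: str, logits: torch.Tensor) -> str:
    """Mask out activities the state machine forbids from `prev`,
    then pick the most likely remaining activity."""
    mask = torch.tensor([0.0 if a in FEASIBLE[prev] else float("-inf")
                         for a in ACTIVITIES])
    return ACTIVITIES[int(torch.argmax(logits + mask))]

# Example: classify one 50-step window of 4 sensor channels.
model = FuzzyLSTM(n_features=4)
window = torch.rand(1, 50, 4)
activity = fsm_filter("idle", model(window)[0])
```

The state-machine filter is what distinguishes this sketch from a plain fuzzified LSTM: by masking infeasible transitions before the argmax, it guarantees that the predicted activity sequence remains physically plausible, which matches the benefit the abstract attributes to FS-LSTM.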

Keywords: deep learning; finite state machine; fuzzy logic; human activity recognition and modelling; human-robot collaboration; long short-term memory.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. A view of the work field of the FEROX Project.
Figure 2. An overview of the use case scenario.
Figure 3. The avatar performing locomotive actions.
Figure 4. The parent-child relationship adopted to obtain position data in the virtual IMU's local frame.
Figure 5. The confusion matrix of the LSTM network.
Figure 6. The diagram of the proposed architecture.
Figure 7. Motion and tilt plotted along an activity sequence.
Figure 8. LSTM training datasets.
Figure 9. Assessing uncertainty in the activity recognition.
Figure 10. The confusion matrix of Traditional LSTM, Fuzzy-LSTM and FS-LSTM. Superior classification accuracy results are identified in bold.
Figure 11. The benchmark of three predicted output sequences via Traditional LSTM, Fuzzy-LSTM and FS-LSTM. The Lost state is marked with the red circle.
Figure 12. The benchmark of GPU utilization during testing. Fuzzy-LSTM and FS-LSTM networks trained with 32 layers (denoted with *), and a hybrid version of FS-LSTM (denoted with **).
Figure 13. The benchmark of power consumption during testing. Fuzzy-LSTM and FS-LSTM networks trained with 32 layers (denoted with *), and a hybrid version of FS-LSTM (denoted with **).
Figure 14. The confusion matrix of Fuzzy-LSTM* (i), FS-LSTM* (ii) and FS-LSTM** (iii). Superior classification accuracy results are identified in bold.

