Sensors (Basel). 2021 Nov 4;21(21):7346. doi: 10.3390/s21217346.

Autonomous Thermal Vision Robotic System for Victims Recognition in Search and Rescue Missions

Christyan Cruz Ulloa et al. Sensors (Basel). 2021.

Abstract

Technological breakthroughs in recent years have led to a revolution in fields such as machine vision and Search and Rescue (SAR) robotics, thanks to new and improved neural network vision models together with modern optical sensors that incorporate thermal cameras capable of capturing data in post-disaster environments (PDE) with harsh conditions (low luminosity, suspended particles, obstructive materials). Because PDE pose high risks (potential collapse of structures, electrical hazards, gas leakage, etc.), primary intervention tasks such as victim identification are carried out by robotic teams equipped with specific sensors such as thermal cameras, RGB cameras, and laser scanners. The application of Convolutional Neural Networks (CNN) to computer vision has been a breakthrough for detection algorithms. Conventional methods for victim identification in these environments use RGB image processing or trained dogs, but detection with RGB images is inefficient in the absence of light or the presence of debris, while developments with thermal images have been limited to the field of surveillance. This paper's main contribution is a novel automatic method based on thermal image processing and CNN for victim identification in PDE, using a robotic system in which a quadruped robot captures data and transmits it to the central station. The robot's automatic data processing and control are carried out through the Robot Operating System (ROS). Several tests have been carried out in different environments to validate the proposed method, recreating PDE with varying lighting conditions, from which datasets were generated for training three neural network models (Faster R-CNN, SSD, and YOLO). The method has been compared against another CNN-based method that uses RGB images for the same task, showing greater effectiveness in PDE; the main results show that the proposed method achieves an efficiency greater than 90%.
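The pipeline the abstract describes (thermal frames fed to a trained CNN detector) can be sketched in a few lines. The following is a minimal, hypothetical example, not the authors' implementation: it runs a Darknet-format YOLOv3 model on a single thermal frame with OpenCV's DNN module. The weight/config file names, the input resolution, and the confidence threshold are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): YOLOv3 inference on a thermal frame.
# File names, input size, and threshold are illustrative assumptions.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-thermal.cfg", "yolov3-thermal.weights")
out_layers = net.getUnconnectedOutLayersNames()

frame = cv2.imread("thermal_frame.png")              # 8-bit thermal image
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

h, w = frame.shape[:2]
for output in net.forward(out_layers):
    for det in output:                               # [cx, cy, bw, bh, obj, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:                         # keep confident detections only
            cx, cy, bw, bh = (det[:4] * np.array([w, h, w, h])).astype(int)
            x, y = cx - bw // 2, cy - bh // 2
            cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 0, 255), 2)
```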

Keywords: ROS; Unitree A1; computer vision; convolutional neural networks; robotic systems; search and rescue robots; thermal images.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Indoor and outdoor scenarios used for test development. (a) ETSII-UPM outdoor testing environment. (b) Scenarios recreated for indoor testing. (c) Top view of the scenarios recreated for indoor testing. Source: Authors.
Figure 2
Robot and instrumentation used to validate the proposed method. (a) Unitree A1 robot equipped with a thermal camera and a RealSense camera. (b) Optris PI 640 thermal camera.
Figure 3
Integration of subsystems for the detection of victims in PDE. Source: Authors.
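As a rough illustration of this subsystem integration, a minimal rospy node could subscribe to the thermal camera stream and hand each frame to the detector. The topic name, image encoding, and the detect_victims hook below are assumptions for illustration, not the authors' code.

```python
# Hypothetical ROS node wiring the thermal stream to a detection callback.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_thermal_image(msg):
    # Convert the ROS Image message to an OpenCV array for the detector.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="mono8")
    # detect_victims(frame)  # hypothetical hook into the CNN detector

rospy.init_node("victim_detector")
rospy.Subscriber("/optris/thermal_image", Image, on_thermal_image)  # assumed topic
rospy.spin()
```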
Figure 4
Example of thin-film transmissivity. Source: Authors.
Figure 5
Unitree robot in different scenarios. (a) Robot indoors (good lighting conditions). (b) Robot outdoors (poor lighting conditions). Source: Authors.
Figure 6
Different datasets used in training. (a) Night dataset. (b) Day dataset. (c) Combined dataset. Source: Authors.
Figure 7
Length-to-width ratio used to detect victims. Source: Authors.
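The length-to-width criterion in this figure can be expressed as a simple aspect-ratio check on a candidate bounding box. A minimal sketch follows; the ratio bounds are illustrative assumptions, not the paper's calibrated values.

```python
# Illustrative length-to-width ratio filter for candidate victim boxes.
# The default bounds are assumed values, not taken from the paper.
def plausible_victim(box_w: float, box_h: float,
                     min_ratio: float = 1.5, max_ratio: float = 4.0) -> bool:
    """Return True if the longer-to-shorter side ratio fits a lying person."""
    long_side, short_side = max(box_w, box_h), min(box_w, box_h)
    return min_ratio <= long_side / short_side <= max_ratio
```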
Figure 8
mAP, recall, and loss for networks with night dataset. (a) mAP, (b) recall, and (c) loss. Source: Authors.
Figure 9
mAP, recall, and loss for networks with day dataset. (a) mAP, (b) recall, and (c) loss. Source: Authors.
Figure 10
mAP, recall, and loss for networks with combined dataset. (a) mAP, (b) recall, and (c) loss. Source: Authors.
Figure 11
mAP, recall, and loss comparison for the YOLOv3 datasets. (a) mAP, (b) recall, and (c) loss. Source: Authors.
Figure 12
Average class precision for YOLOv3. Source: Authors.
Figure 13
Examples of victim detection with Faster R-CNN, SSD, and YOLOv3. Detection confidences are (a) 99%, (b) 60%, (c) 98%, (d) 97%, (e) 96%, and (f) 99%. (a) Faster R-CNN, (b) SSD, (c) YOLOv3 (day outdoor), (d) YOLOv3 (day indoor), (e) YOLOv3 (day outdoor), and (f) YOLOv3 (night indoor). Source: Authors.
Figure 14
Evaluation of the conventional method that uses RGB images for victim detection under good lighting conditions, using CNN-YOLOv3. (a) Neural network training. (b) Outdoor evaluation. (c) Indoor evaluation. Source: Authors.
Figure 15
Evaluation of the conventional method against the proposed method for victim detection under different lighting conditions, using CNN-YOLOv3. (a) Case 1: poor detection by the RGB method in low light. (b) Case 1: good detection by the thermal method in low light. (c) Case 2: poor detection by the RGB method in the absence of light. (d) Case 2: good detection by the thermal method in the absence of light. (e) Case 3: good detection of fully covered victims by the thermal method. (f) Case 4: good detection of partially covered victims by the thermal method. (g) Case 5: good detection of people in front of heat sources by the RGB method. (h) Case 5: poor detection of people in front of heat sources by the thermal method. Source: Authors.
Figure 16
Efficiency comparison (%) of the analyzed methods (thermal and RGB). Source: Authors.
Figure 17
Victim locations in the mapped environment. (a) Victim detection in reconstructed Scenario 1. (b) Victim detection in reconstructed Scenario 2. Source: Authors.
