Evaluation of the Azure Kinect and Its Comparison to Kinect V1 and Kinect V2
- PMID: 33430149
- PMCID: PMC7827245
- DOI: 10.3390/s21020413
Abstract
The Azure Kinect is the successor of the Kinect v1 and Kinect v2. In this paper, we perform a brief data analysis and comparison of all Kinect versions, with a focus on precision (repeatability) and various aspects of noise of these three sensors. We then thoroughly evaluate the new Azure Kinect: its warm-up time, precision (and the sources of its variability), accuracy (measured thoroughly using a robotic arm), behaviour on surfaces of varying reflectivity (18 different materials), and the multipath and flying-pixel phenomena. Furthermore, we validate its performance in both indoor and outdoor environments, including direct and indirect sunlight. We conclude with a discussion of its improvements in the context of the evolution of the Kinect sensor. We show that it is crucial to design accuracy experiments carefully, since the RGB and depth cameras are not aligned. Our measurements confirm the officially stated values, namely a standard deviation ≤ 17 mm and a distance error < 11 mm at distances of up to 3.5 m from the sensor in all four supported modes. The device, however, has to warm up for at least 40–50 min to give stable results. Due to its time-of-flight technology, the Azure Kinect cannot be used reliably in direct sunlight; it is therefore suited mostly to indoor applications.
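The distinction the abstract draws between precision (repeatability, reported as standard deviation) and accuracy (distance error against ground truth) can be sketched numerically. The following is an illustrative example, not the authors' code: it simulates a stack of depth frames of a static flat wall and computes the two metrics as they are typically defined for depth sensors. The frame count, noise level, and ground-truth distance are arbitrary assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): precision is the
# per-pixel standard deviation of depth over repeated frames of a static
# scene; accuracy is the deviation of the time-averaged depth from a
# known ground-truth distance.

rng = np.random.default_rng(0)

GROUND_TRUTH_MM = 2000.0  # assumed wall distance (mm)

# Simulated stack of 100 depth frames (64x64 px) with 5 mm Gaussian noise.
frames = GROUND_TRUTH_MM + rng.normal(0.0, 5.0, size=(100, 64, 64))

per_pixel_std = frames.std(axis=0)            # precision map (mm)
mean_depth = frames.mean(axis=0)              # temporally averaged depth
distance_error = np.abs(mean_depth - GROUND_TRUTH_MM)  # accuracy map (mm)

print(f"mean precision:      {per_pixel_std.mean():.2f} mm")
print(f"mean distance error: {distance_error.mean():.2f} mm")
```

With the assumed 5 mm sensor noise, both metrics fall well inside the officially stated bounds quoted above (≤ 17 mm standard deviation, < 11 mm distance error); on real hardware the values additionally depend on warm-up state, depth mode, and surface reflectivity, as the paper evaluates.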
Keywords: 3D scanning; Azure Kinect; HRI (human–robot interaction); Kinect; SLAM (simultaneous localization and mapping); depth imaging; gesture recognition; mapping; object recognition; robotics.
Conflict of interest statement
The authors declare there are no conflicts of interest.
