PLIN: A Network for Pseudo-LiDAR Point Cloud Interpolation
- PMID: 32178238
- PMCID: PMC7146160
- DOI: 10.3390/s20061573
Abstract
LiDAR sensors can provide dependable 3D spatial information at a low frequency (around 10 Hz) and have been widely applied in the fields of autonomous driving and unmanned aerial vehicles (UAVs). However, the frame rate of the camera, which operates at a higher frequency (around 20 Hz), has to be decreased to match the LiDAR in a multi-sensor system. In this paper, we propose a novel Pseudo-LiDAR interpolation network (PLIN) to increase the frequency of LiDAR sensor data. PLIN can generate temporally and spatially high-quality point cloud sequences to match the high frequency of cameras. To achieve this goal, we design a coarse interpolation stage guided by consecutive sparse depth maps and the motion relationship between frames, followed by a refined interpolation stage guided by the realistic scene. Using this coarse-to-fine cascade structure, our method can progressively perceive multi-modal information and generate accurate intermediate point clouds. To the best of our knowledge, this is the first deep framework for Pseudo-LiDAR point cloud interpolation, which shows appealing applications in navigation systems equipped with both LiDAR and cameras. Experimental results demonstrate that PLIN achieves promising performance on the KITTI dataset, significantly outperforming the traditional interpolation method and the state-of-the-art video interpolation technique.
Keywords: 3D point cloud; convolutional neural networks; depth completion; pseudo-LiDAR interpolation; video interpolation.
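The abstract describes PLIN's output as a pseudo-LiDAR point cloud, i.e., a point cloud obtained by back-projecting an (interpolated) depth map through the pinhole camera model. The sketch below illustrates only this standard back-projection step, not the PLIN architecture itself; the function name, the intrinsics values (chosen to resemble a KITTI-like setup), and the random placeholder depth map are all hypothetical and are not taken from the paper.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, metres) into an N x 3
    pseudo-LiDAR point cloud using the pinhole camera model.
    Pixels with non-positive depth are treated as invalid and dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Hypothetical intrinsics in the spirit of a KITTI-like camera (not from the paper).
fx = fy = 721.5
cx, cy = 609.6, 172.9

# Placeholder depth map standing in for an interpolated intermediate frame.
depth_mid = np.random.uniform(1.0, 80.0, size=(375, 1242)).astype(np.float32)
points_mid = depth_to_pseudo_lidar(depth_mid, fx, fy, cx, cy)
print(points_mid.shape)  # (N, 3) intermediate pseudo-LiDAR point cloud
```

In a PLIN-style pipeline, this conversion would be applied to the refined intermediate depth map to obtain the interpolated point cloud that doubles the effective LiDAR frame rate.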
Conflict of interest statement
The authors declare no conflict of interest.