Sensors (Basel). 2020 Mar 12;20(6):1573. doi: 10.3390/s20061573.

PLIN: A Network for Pseudo-LiDAR Point Cloud Interpolation

Haojie Liu et al. Sensors (Basel).

Abstract

LiDAR sensors can provide dependable 3D spatial information at a low frequency (around 10 Hz) and have been widely applied in the fields of autonomous driving and unmanned aerial vehicles (UAVs). However, the camera, which runs at a higher frequency (around 20 Hz), must be downsampled to match the LiDAR in a multi-sensor system. In this paper, we propose a novel Pseudo-LiDAR interpolation network (PLIN) to increase the frequency of LiDAR sensor data. PLIN can generate temporally and spatially high-quality point cloud sequences to match the high frequency of cameras. To achieve this goal, we design a coarse interpolation stage guided by consecutive sparse depth maps and the motion relationship between them, followed by a refined interpolation stage guided by the realistic scene. Using this coarse-to-fine cascade structure, our method progressively perceives multi-modal information and generates accurate intermediate point clouds. To the best of our knowledge, this is the first deep framework for Pseudo-LiDAR point cloud interpolation, which has appealing applications in navigation systems equipped with LiDAR and cameras. Experimental results demonstrate that PLIN achieves promising performance on the KITTI dataset, significantly outperforming the traditional interpolation method and a state-of-the-art video interpolation technique.
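The frequency mismatch that motivates PLIN can be made concrete with a small timing sketch. This is purely illustrative (the rates are the approximate ones quoted in the abstract, and the variable names are ours): at 10 Hz LiDAR and 20 Hz camera, every other camera frame has no matching LiDAR sweep, and those are exactly the timestamps an interpolation network must fill in.

```python
# Illustrative sketch (not from the paper): which camera timestamps
# lack a matching LiDAR sweep when LiDAR runs at 10 Hz and the camera at 20 Hz?
lidar_hz, camera_hz = 10, 20
duration = 0.5  # seconds of recording considered

lidar_times = [i / lidar_hz for i in range(int(duration * lidar_hz) + 1)]
camera_times = [i / camera_hz for i in range(int(duration * camera_hz) + 1)]

# Camera frames with no LiDAR sweep within a small tolerance.
missing = [t for t in camera_times
           if not any(abs(t - s) < 1e-6 for s in lidar_times)]
# Each missing timestamp lies midway between two LiDAR sweeps, so an
# intermediate point cloud can be interpolated from the two adjacent
# sparse depth maps and the surrounding color images.
```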

Keywords: 3D point cloud; convolutional neural networks; depth completion; pseudo-LiDAR interpolation; video interpolation.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Overall pipeline of the proposed method. PLIN aims to address the frequency mismatch between camera and LiDAR sensors by generating temporally and spatially high-quality point cloud sequences. Our method takes three consecutive color images and two sparse depth maps as inputs and interpolates an intermediate dense depth map, which is then transformed into a Pseudo-LiDAR point cloud using the camera intrinsics.
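The last step in the caption, lifting the interpolated dense depth map into a Pseudo-LiDAR point cloud with the camera intrinsics, follows standard pinhole back-projection. A minimal NumPy sketch (the function name and the toy intrinsics below are ours, not from the paper; the actual work uses the KITTI calibration):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, metres) into an N x 3
    Pseudo-LiDAR point cloud with the pinhole model:
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # depth == 0 marks invalid pixels

# Toy example: a 2 x 2 depth map, unit focal length, principal point at (0, 0).
pts = depth_to_pseudo_lidar(np.array([[1.0, 2.0], [0.0, 4.0]]), 1.0, 1.0, 0.0, 0.0)
```

The zero-depth pixel is dropped, so the toy map yields three 3D points; on a real interpolated depth map this produces the dense Pseudo-LiDAR sweep shown in the figure.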
Figure 2
Overview of the proposed Pseudo-LiDAR interpolation network (PLIN). The architecture consists of three modules: the motion guidance module, the scene guidance module, and the transformation module.
Figure 3
Results of the interpolated depth maps obtained by PLIN. From left to right: the input color image (column 1), the corresponding sparse depth map (column 2), the dense depth map used as ground truth during training (column 3), and our network's prediction (column 4). Our method recovers the original depth information and generates much denser distributions.
Figure 4
Visual results of the ablation study. We show the color image, the interpolated dense depth map, two views of the generated Pseudo-LiDAR, and enlarged areas. The complete network produces a more accurate depth map, and the distribution and shape of its Pseudo-LiDAR are more similar to those of the ground-truth point cloud.
Figure 5
Visual comparisons of the point clouds obtained by different methods. We show the intermediate color images, the ground truth, and the Pseudo-LiDAR point clouds interpolated by three methods. For small objects such as cars and people, our model produces outlines and boundary regions that are more similar to the ground truth.


