PLIN: A Network for Pseudo-LiDAR Point Cloud Interpolation
- PMID: 32178238
- PMCID: PMC7146160
- DOI: 10.3390/s20061573
Abstract
LiDAR sensors can provide dependable 3D spatial information at a low frequency (around 10 Hz) and have been widely applied in the fields of autonomous driving and unmanned aerial vehicles (UAVs). However, cameras operate at a higher frequency (around 20 Hz), which has to be decreased to match the LiDAR in a multi-sensor system. In this paper, we propose a novel Pseudo-LiDAR interpolation network (PLIN) to increase the frequency of LiDAR sensor data. PLIN can generate temporally and spatially high-quality point cloud sequences that match the high frequency of cameras. To achieve this goal, we design a coarse interpolation stage guided by consecutive sparse depth maps and the motion relationship between them, together with a refined interpolation stage guided by the realistic scene. Using this coarse-to-fine cascade structure, our method progressively perceives multi-modal information and generates accurate intermediate point clouds. To the best of our knowledge, this is the first deep framework for Pseudo-LiDAR point cloud interpolation, and it shows appealing applications in navigation systems equipped with both LiDAR and cameras. Experimental results demonstrate that PLIN achieves promising performance on the KITTI dataset, significantly outperforming the traditional interpolation method and a state-of-the-art video interpolation technique.
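The "Pseudo-LiDAR" representation referred to in the abstract is conventionally obtained by back-projecting each pixel of a (interpolated) depth map into 3D space through the pinhole camera model. The sketch below illustrates this standard conversion step, not the PLIN network itself; the function name and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are illustrative placeholders, and a real pipeline would use the calibrated intrinsics provided with the KITTI dataset.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, meters) into a pseudo-LiDAR
    point cloud (N x 3, camera coordinates) via the pinhole model.

    Pixels with depth <= 0 are treated as holes and skipped, which is
    how sparse LiDAR depth maps are commonly handled.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    valid = depth > 0                               # keep measured pixels only
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                    # X = (u - cx) * Z / fx
    y = (v[valid] - cy) * z / fy                    # Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=1)              # N x 3 points
```

An interpolated intermediate depth map produced by a coarse-to-fine network can be passed through this conversion to yield the intermediate point cloud frame that the abstract describes.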
Keywords: 3D point cloud; convolutional neural networks; depth completion; pseudo-LiDAR interpolation; video interpolation.
Conflict of interest statement
The authors declare no conflict of interest.