Sensors (Basel). 2019 Mar 28;19(7):1514. doi: 10.3390/s19071514.

Data-Driven Point Cloud Objects Completion


Yang Zhang et al. Sensors (Basel). 2019.

Abstract

With the development of laser scanning techniques, large-scale 3D scenes can be acquired rapidly. However, many scanned objects suffer from severe incompleteness caused by scanning angles or occlusion, which strongly impacts their later use in 3D perception and modeling, and traditional point cloud completion methods often fail to provide satisfactory results when large parts are missing. In this paper, by utilising 2D single-view images to infer 3D structures, we propose a data-driven Point Cloud Completion Network (PCCNet), an image-guided deep-learning-based object completion framework. Given an incomplete point cloud and the corresponding scanned image as input, the network acquires sufficient completion rules through an encoder-decoder architecture. Based on an attention-based 2D-3D fusion module, the network integrates 2D and 3D features adaptively according to their information integrity. We also propose a projection loss as an additional supervisor to enforce a consistent spatial distribution across multi-view observations. To demonstrate its effectiveness, PCCNet is first compared to recent generative networks and shows stronger 3D reconstruction ability. PCCNet is then compared to a recent point cloud completion method, demonstrating that it provides satisfactory completion results for objects with large missing parts.
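The projection loss mentioned above can be illustrated with a minimal sketch: the predicted and ground-truth point clouds are projected to 2D from several viewpoints, and the projections are compared with a symmetric Chamfer distance. This is a hypothetical reconstruction under simplifying assumptions (orthographic projection, plain 2D Chamfer distance, numpy instead of a differentiable framework); the paper's exact formulation may differ.

```python
import numpy as np

def project(points, R):
    """Orthographic projection: rotate the cloud, then drop the depth axis."""
    return (points @ R.T)[:, :2]

def chamfer_2d(a, b):
    """Symmetric Chamfer distance between two 2D point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def projection_loss(pred, gt, rotations):
    """Average 2D Chamfer distance over the given viewpoints."""
    return float(np.mean([chamfer_2d(project(pred, R), project(gt, R))
                          for R in rotations]))

if __name__ == "__main__":
    # Hypothetical usage: two views (front, and rotated 90 degrees about z).
    Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    rng = np.random.default_rng(0)
    gt = rng.random((64, 3))
    pred = gt + 0.01 * rng.standard_normal((64, 3))
    print(projection_loss(pred, gt, [np.eye(3), Rz]))
```

In training, such a term would supplement a 3D reconstruction loss, penalizing generated shapes whose silhouettes disagree with the ground truth from any observed viewpoint.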

Keywords: 3D reconstruction; mobile laser scanning; point cloud generation; point cloud object completion; single image.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. The scanned point clouds of a parking place.
Figure 2. Sample images of reconstruction and completion on Mobile Laser Scanning (MLS) point clouds. (a) Real street images. (b) Scanned point clouds. (c) Generated point clouds (rendered). (d) Merged point clouds.
Figure 3. The framework of PCCNet.
Figure 4. The components of our MLS system.
Figure 5. The procedure for making MLS pairs.
Figure 6. The procedure for obtaining individual MLS objects.
Figure 7. Results on rendered images. (a) Rendered images. (b) Ground truth. (c) Point clouds generated by PCCNet.
Figure 8. Two samples of projection from the same viewpoint. (a) Rendered input images. (b) Projection of the ground truth. (c) Projection of the generated shapes.
Figure 9. Comparison of training loss curves. The red line shows training without the projection loss; the blue line shows training with it.
Figure 10. Car images from ObjectNet3D [30]. From left to right: original images, results of PCCNet, results of OGN.
Figure 11. Car images from the Internet. From left to right: original images, results of PCCNet, results of PSGN.
Figure 12. Results of MLS object completion. (a) Street images. (b) Original MLS point clouds (more than half missing). (c) Completion results of PCCNet. (d) Completion results of [8].
Figure 13. More results on MLS data. (a) Street images. (b) Original MLS point clouds. (c) Completion results of PCCNet.

References

    1. Yue X., Wu B., Seshia S.A., Keutzer K., Sangiovanni-Vincentelli A.L. A LiDAR Point Cloud Generator: From a Virtual World to Autonomous Driving; Proceedings of the ACM International Conference on Multimedia Retrieval; Yokohama, Japan. 11–14 June 2018.
    2. Wu T., Liu J., Li Z., Liu K., Xu B. Accurate Smartphone Indoor Visual Positioning Based on a High-Precision 3D Photorealistic Map. Sensors. 2018;18:1974. doi: 10.3390/s18061974.
    3. Stets J.D., Sun Y., Corning W., Greenwald S. Visualization and Labeling of Point Clouds in Virtual Reality. arXiv. 2018. 1804.04111
    4. Wu M.L., Chien J.C., Wu C.T., Lee J.D. An Augmented Reality System Using Improved-Iterative Closest Point Algorithm for On-Patient Medical Image Visualization. Sensors. 2018;18:2505. doi: 10.3390/s18082505.
    5. Balsabarreiro J., Lerma J.L. A new methodology to estimate the discrete-return point density on airborne lidar surveys. Int. J. Remote Sens. 2014;35:1496–1510. doi: 10.1080/01431161.2013.878063.
