Sensors (Basel). 2021 May 7;21(9):3229. doi: 10.3390/s21093229.

PPTFH: Robust Local Descriptor Based on Point-Pair Transformation Features for 3D Surface Matching

Lang Wu et al. Sensors (Basel). 2021.

Abstract

Three-dimensional feature description for a local surface is a core technology in 3D computer vision. Existing descriptors perform poorly in terms of distinctiveness and robustness owing to noise, mesh decimation, clutter, and occlusion in real scenes. In this paper, we propose a 3D local surface descriptor using point-pair transformation feature histograms (PPTFHs) to address these challenges. The generation process of the PPTFH descriptor consists of three steps. First, a simple but efficient strategy is introduced to partition the point-pair sets on the local surface into four subsets. Then, three feature histograms corresponding to each point-pair subset are generated by the point-pair transformation features, which are computed using the proposed Darboux frame. Finally, all the feature histograms of the four subsets are concatenated into a vector to generate the overall PPTFH descriptor. The performance of the PPTFH descriptor is evaluated on several popular benchmark datasets, and the results demonstrate that the PPTFH descriptor achieves superior performance in terms of descriptiveness and robustness compared with state-of-the-art algorithms. The benefits of the PPTFH descriptor for 3D surface matching are demonstrated by the results obtained from five benchmark datasets.

Keywords: 3D registration; 3D surface matching; local surface descriptor; object recognition.
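To make the three-step construction described in the abstract concrete, the sketch below outlines a PPTFH-style pipeline in Python: point pairs in a key point's neighborhood are split into four subsets by a partition feature, three feature histograms are built per subset, and everything is concatenated into one vector. The helper functions partition_feature and transformation_features are stand-ins; the paper's exact definitions of the partition feature δ, the proposed Darboux frame, and the transformation features are not given in this abstract.

```python
import numpy as np
from itertools import combinations

def partition_feature(p_i, n_i, p_j, n_j):
    # Stand-in for the partition feature delta of a point pair; the
    # paper's definition is not given in the abstract. Here we use the
    # angles between each normal and the connecting line as a proxy.
    d = p_j - p_i
    d = d / (np.linalg.norm(d) + 1e-12)
    return np.dot(n_i, d), np.dot(n_j, d)

def transformation_features(p_i, n_i, p_j, n_j):
    # Stand-in for the three point-pair transformation features; a
    # PFH-like Darboux frame is used here purely for illustration.
    u = n_i
    d = p_j - p_i
    dist = np.linalg.norm(d) + 1e-12
    v = np.cross(d, u)
    v = v / (np.linalg.norm(v) + 1e-12)
    w = np.cross(u, v)
    alpha = np.dot(v, n_j)                               # in [-1, 1]
    phi = np.dot(u, d) / dist                            # in [-1, 1]
    theta = np.arctan2(np.dot(w, n_j), np.dot(u, n_j))   # in [-pi, pi]
    return alpha, phi, theta

def pptfh_like_descriptor(points, normals, n_bins=7):
    # Step 1: partition all point pairs in the neighborhood into 4 subsets
    # (hypothetical rule: sign combination of the two partition angles).
    subsets = {k: [] for k in range(4)}
    for i, j in combinations(range(len(points)), 2):
        a, b = partition_feature(points[i], normals[i], points[j], normals[j])
        k = int(a >= 0) * 2 + int(b >= 0)
        subsets[k].append(transformation_features(points[i], normals[i],
                                                  points[j], normals[j]))
    # Steps 2-3: three normalized histograms per subset, concatenated.
    ranges = [(-1.0, 1.0), (-1.0, 1.0), (-np.pi, np.pi)]
    descriptor = []
    for k in range(4):
        feats = np.asarray(subsets[k]).reshape(-1, 3)
        for c in range(3):
            hist, _ = np.histogram(feats[:, c], bins=n_bins, range=ranges[c])
            descriptor.append(hist / max(hist.sum(), 1))
    return np.concatenate(descriptor)  # length 4 * 3 * n_bins
```

With n_bins = 7 this sketch yields an 84-dimensional vector; the real PPTFH dimensionality depends on the paper's actual bin counts.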

Conflict of interest statement

All authors declare no conflicts of interest.

Figures

Figure 1
Partition of the point-pair sets. (a) All point-pairs in the key point neighborhood. (b) The partition feature δ of the point-pair (p_i, p_j). (c) The four point-pair subsets based on the feature δ.
Figure 2
Generation of the point-pair transformation matrix. (a) Definition of the proposed Darboux frame. (b) Definition of the source and target points. (c) Computation of the point-pair transformation matrix.
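As an illustration of what Figure 2 depicts, the sketch below builds a conventional Darboux-style frame at the source point and assembles a 4x4 rigid transformation that maps world coordinates into that frame. The paper's proposed frame and its exact transformation-matrix computation may differ, so treat this construction as a generic placeholder.

```python
import numpy as np

def darboux_frame(p_src, n_src, p_tgt):
    # Generic Darboux-style frame (u, v, w) at the source point; the
    # paper's proposed frame may be defined differently.
    u = n_src / (np.linalg.norm(n_src) + 1e-12)
    d = p_tgt - p_src
    v = np.cross(d, u)
    v = v / (np.linalg.norm(v) + 1e-12)
    w = np.cross(u, v)
    return u, v, w

def point_pair_transform(p_src, n_src, p_tgt):
    # 4x4 homogeneous transform expressing world coordinates in the
    # source point's local frame -- one plausible reading of a
    # "point-pair transformation matrix".
    u, v, w = darboux_frame(p_src, n_src, p_tgt)
    R = np.stack([u, v, w])        # rows are the local axes
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ p_src          # move the source point to the origin
    return T

# Example: coordinates of the target point expressed in the source frame.
# p_local = (point_pair_transform(p_s, n_s, p_t) @ np.append(p_t, 1.0))[:3]
```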
Figure 3
Examples from the tuning dataset. (a) Example models. (b) Example scenes with 0.5 mr Gaussian noise and resampling to 1/4 of the model resolution.
Figure 4
Different methods for computing the point-pair transformation matrix. (a) Proposed method. (b) Method-1. (c) Method-2.
Figure 5
RPC and AUCpr results with different parameter configurations. (a) Different methods for computing the point-pair transformation matrix, with the other parameters set to r = 15 mr, N_σ = 4, N_a = 5, N_d = 7 (values in parentheses are the AUCpr results). (b) Different numbers of point-pair subsets. (c) Different bin numbers for the two types of features.
Figure 6
Examples of 4 models and scenes from the 4 datasets. (a) Bologna dataset. (b) UWA dataset. (c) SDSR dataset. (d) Kinect dataset.
Figure 7
RPC results on the 4 application datasets (AUCpr values are shown in parentheses). (a) Bologna dataset with 0.5 mr noise and 1/4 downsampling. (b) UWA dataset with 1/4 downsampling. (c) SDSR dataset for registration. (d) Kinect dataset.
Figure 8
AUCpr results under different nuisances. (a) Bologna dataset with different noise levels and 1/4 downsampling. (b) Bologna dataset with varying mesh resolution. (c) UWA dataset with different clutter rates. (d) UWA dataset with different occlusion rates.
Figure 9
Evaluation results for compactness and time efficiency. (a) Compactness of all compared methods. (b) Time consumption of all compared methods; the y-axis is logarithmic for clarity.
Figure 10
Surface matching performance (F1 scores) under different support radii for the different datasets. The solid-filled marker indicates the support radius and F1 score at which each descriptor performs best, and the value in parentheses is the highest F1 score of each local descriptor. (Figure best viewed in color.)
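For reference, the F1 score plotted in Figure 10 is the harmonic mean of precision and recall over feature correspondences; a minimal computation, assuming precision is correct matches over retrieved matches and recall is correct matches over ground-truth correspondences, is:

```python
def f1_score(num_correct, num_retrieved, num_ground_truth):
    # F1 = 2*P*R / (P + R), with P = correct/retrieved and
    # R = correct/ground-truth (assumed convention).
    if num_retrieved == 0 or num_ground_truth == 0:
        return 0.0
    p = num_correct / num_retrieved
    r = num_correct / num_ground_truth
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)
```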
Figure 11
Sample visual registration results of our PPTFH descriptor on the BR, UWA, SDSR, and Kinect datasets. From left to right: model point cloud (dark green); scene point cloud (red); correspondences from PPTFH+NNSR; correspondences after GC; and the registration result from RANSAC.
Figure 12
Sample visual registration results of our PPTFH descriptor on the WHU-TLS dataset. From left to right: model point cloud (dark green); scene point cloud (red); correspondences from PPTFH+NNSR; correspondences after GC; and the registration result from RANSAC.

