Sensors (Basel). 2021 Mar 5;21(5):1807. doi: 10.3390/s21051807.

Underwater Object Recognition Using Point-Features, Bayesian Estimation and Semantic Information


Khadidja Himri et al. Sensors (Basel). 2021.

Abstract

This paper proposes a 3D object recognition method for non-coloured point clouds using point features. The method targets application scenarios such as Inspection, Maintenance and Repair (IMR) of industrial sub-sea structures composed of pipes and connecting objects (such as valves, elbows and R-Tee connectors). The recognition algorithm uses an a priori database of partial views of the objects, stored as point clouds. The recognition pipeline has five stages: (1) plane segmentation, (2) pipe detection, (3) semantic object segmentation and detection, (4) feature-based object recognition and (5) Bayesian estimation. To apply the Bayesian estimation, an object tracking method based on a new Interdistance Joint Compatibility Branch and Bound (IJCBB) algorithm is proposed. The paper studies how recognition performance depends on (1) the point-feature descriptor used, (2) the use (or not) of Bayesian estimation and (3) the inclusion of semantic information about the object connections. The methods are tested on an experimental dataset containing laser scans and Autonomous Underwater Vehicle (AUV) navigation data. The best results are obtained with the Clustered Viewpoint Feature Histogram (CVFH) descriptor, achieving recognition rates of 51.2% (descriptor only), 68.6% (with Bayesian estimation) and 90% (with semantic information), clearly showing the benefits of Bayesian estimation (an 18% increase) and of semantic information (a further 21% increase).

Keywords: 3D object recognition; AUV; Bayesian probabilities; JCBB; autonomous manipulation; global descriptors; inspection; laser scanner; maintenance and repair; multi-object tracking; pipeline detection; point clouds; semantic information; semantic segmentation; underwater environment.
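The Bayesian estimation stage described in the abstract fuses per-scan recognition evidence for a tracked object into a belief over object classes. A minimal sketch of such a recursive update is given below; the `bayes_update` helper, the class names and all likelihood values are illustrative assumptions, not taken from the paper.

```python
def bayes_update(prior, likelihood):
    """One recursive Bayesian update: posterior is proportional to likelihood x prior."""
    posterior = {c: prior[c] * likelihood.get(c, 0.0) for c in prior}
    total = sum(posterior.values())
    if total == 0.0:  # every class ruled out: fall back to the prior
        return dict(prior)
    return {c: p / total for c, p in posterior.items()}

classes = ["ball-valve", "3-way-valve", "elbow", "r-tee"]
belief = {c: 1.0 / len(classes) for c in classes}  # uniform prior

# Per-scan likelihoods for one tracked object, e.g. normalised descriptor
# matching scores from the recognition stage (values are illustrative).
scans = [
    {"ball-valve": 0.40, "3-way-valve": 0.35, "elbow": 0.15, "r-tee": 0.10},
    {"ball-valve": 0.50, "3-way-valve": 0.30, "elbow": 0.10, "r-tee": 0.10},
]
for likelihood in scans:
    belief = bayes_update(belief, likelihood)

print(max(belief, key=belief.get))  # most probable class after fusing both scans
```

In this sketch, tracking (the IJCBB association step in the paper) is what guarantees that successive likelihoods refer to the same physical object, so evidence can legitimately be accumulated across scans.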


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. 3D Object Recognition Pipeline.
Figure 2. Ball-valve (top) and Ball-valve-s (bottom) with their respective segmented scans.
Figure 3. Pipe detection: (left) 3D laser scan point cloud; (right) detected pipes with their respective endpoints.
Figure 4. Pipe merging: (left) pipe detection result prior to merging, with circles marking multiple detections of the same pipe; (right) result after merging, where the multiple detections have been merged into a single one.
Figure 5. Semantic segmentation: red points mark the centroids of segmented objects. The red circle highlights a segmented object located at an isolated extremity.
Figure 6. Semantic segmentation: (left) input 3D point cloud; (right) pipes (blue cylinders) with their endpoints (green spheres) and the centroids of the objects to be segmented (red spheres), along with the segmented object point clouds (coloured). Objects 1, 2, 3 and 4 are, respectively, a Ball-Valve, a 3-Way-Valve, an Elbow and an R-Tee.
Figure 7. Interpretation tree showing, for each object e_i (level i), its potential associations f_1…n; the (*) node represents a spurious measurement.
Figure 8. Roto-translation estimation.
Figure 9. Tracking objects over two consecutive scans, shown in green/red and yellow/blue. The significant displacement between the two scans is the result of navigation inaccuracies caused by noisy Doppler Velocity Log (DVL) readings in the test pool. Solid lines indicate the objects associated by the tracking.
Figure 10. Confusing views of the Ball-Valve and 3-Way-Valve objects.
Figure 11. The Girona 500 AUV inspecting the structure: (a) the mapped structure before deployment; (b) underwater view of the water tank; (c) online 3D visualizer with a scan of the structure.
Figure 12. Mapped object point clouds: (left) located at their dead-reckoning positions; (right) located at the positions estimated by tracking with the IJCBB algorithm.
Figure 13. Graphical representation of the confusion matrices.
Figure 14. Recognition performance (accuracy, recall and precision) for the descriptor-based, Bayesian-based and semantic-based methods: (top) OUR-CVFH; (bottom) CVFH.
Figure 15. PVC objects used in the experiment (first column) with their respective database views (second column). The last two columns show manually selected examples of segmented objects from the experiments, with the most difficult in red and the easiest in blue.


