Sensors (Basel). 2021 Oct 15;21(20):6861. doi: 10.3390/s21206861.

Obstacle Detection Using a Facet-Based Representation from 3-D LiDAR Measurements

Marius Dulău et al. Sensors (Basel). 2021.

Abstract

In this paper, we propose an obstacle detection approach that uses a facet-based obstacle representation. The approach has three main steps: ground point detection, clustering of obstacle points, and facet extraction. Measurements from a 64-layer LiDAR are used as input. First, ground points are detected and eliminated so that obstacle points can be selected and object instances created. To determine the objects, obstacle points are grouped using a channel-based clustering approach. For each object instance, its contour is extracted and, using a RANSAC-based approach, the obstacle facets are selected. For each processing stage, optimizations are proposed to improve the runtime. For the evaluation, we compare our proposed approach with an existing approach on the KITTI benchmark dataset. The proposed approach achieves similar or better results for some obstacle categories, at a lower computational complexity.

Keywords: LiDAR point cloud; facet representation; object contour; obstacle detection.
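The facet-selection step described in the abstract can be illustrated with a minimal RANSAC line fit on 2-D contour points (top view): a facet candidate is the dominant line supported by the most contour points. This is a generic sketch of the RANSAC idea, not the authors' implementation; the function name and parameters (`iters`, `inlier_tol`) are assumptions.

```python
import math
import random

def ransac_facet(points, iters=200, inlier_tol=0.1):
    """Find the contour points supporting the dominant line (one
    facet candidate, top view) with a basic RANSAC loop.
    Illustrative sketch only, not the paper's implementation."""
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        # Line through the two samples: a*x + b*y + c = 0, normalized
        # so that |a*x + b*y + c| is the point-to-line distance.
        a, b = y2 - y1, x1 - x2
        norm = math.hypot(a, b)
        if norm == 0:
            continue  # degenerate sample, skip
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

In a full pipeline this loop would be repeated on the remaining contour points to extract each visible facet of an object in turn.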

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Potential application: (a–c): Facet-based representation used in automatic emergency braking situations (top-view visualization). (a): The gray car detects the black car (red bounding box). (b): The gray car brakes close to the rear of the black vehicle with its door open (if the car is detected as the red bounding box). (c): The gray car can perform a smoother braking if the black car is represented by its visible facets (each facet with a different color). (d–f): A large articulated vehicle cannot be modelled as an oriented cuboid during cornering maneuvers.
Figure 2
System architecture.
Figure 3
Channel values representation in a point cloud (top view). The red area is where the value for X is bigger than the value for Y for the same point; the white area is where the value for X is smaller than the value for Y.
Figure 4
Side view of a channel. Detected ground points are colored with gray. X/Y means either the x-axis or the y-axis value is used, depending on the specific channel orientation.
Figure 5
Ground detection. (a): Results shown in 3-D view. (b): Results overlay over the corresponding camera image.
Figure 6
(a): Object primitive clusters in a channel. (b): Boundary points of a cluster shown in red.
Figure 7
Creation of the intra-channel clusters for a van. Each cluster is represented by a rectangle (delimited by its most extreme points, shown as red dots) in the channel side view. The clusters have different sizes and intersect, or lie close to, the clusters of the previous channel.
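The cross-channel linking described in this caption can be sketched as merging 1-D cluster extents: a cluster in the current channel inherits an object label when its interval overlaps, or lies within a small gap of, a cluster from the previous channel. This is an illustrative sketch of a channel-based clustering idea under assumed names and parameters, not the paper's algorithm.

```python
def merge_channel_clusters(channels, gap=0.3):
    """Link 1-D clusters across consecutive channels into object labels.

    `channels` is a list of channels; each channel is a list of
    (lo, hi) cluster extents along the channel axis. Clusters that
    overlap, or lie within `gap` meters of, a cluster in the previous
    channel share that cluster's object label. Illustrative sketch;
    parameter names and the gap threshold are assumptions.
    """
    labels = {}      # (channel_idx, cluster_idx) -> object label
    next_label = 0
    prev = []        # previous channel's clusters with their labels
    for ci, clusters in enumerate(channels):
        cur = []
        for ki, (lo, hi) in enumerate(clusters):
            label = None
            for (plo, phi), plabel in prev:
                # Intervals overlapping or within `gap` share an object.
                if lo <= phi + gap and plo - gap <= hi:
                    label = plabel
                    break
            if label is None:
                label = next_label
                next_label += 1
            labels[(ci, ki)] = label
            cur.append(((lo, hi), label))
        prev = cur
    return labels
```

For example, two van slices at similar extents in consecutive channels receive the same label, while a distant cluster starts a new object.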
Figure 8
Clustering results. (a): Object clusters in point cloud, with distinct color per cluster. (b): Detected clusters projected on the corresponding camera image.
Figure 9
(a): Top view of a van. (b): Contour obtained from clustering.
Figure 10
Runtime comparison graph for ground detection methods on 252 scenes.
Figure 11
Runtime comparison graph for clustering methods on 252 scenes.
Figure 12
Multiple close objects clustered as a single object. (a): Image with multiple close objects. (b): Single cluster created, point-cloud view (same label for all the points).
Figure 13
Facet detection on a van, tram, cyclist, pedestrian, buildings, and a wall. (a): Objects from the point cloud. (b): Contour of object (top view). (c): Facets (with red) over contour. (d): 3-D facets over objects.
Figure 14
Circular fences. (a): Top view. (b): Perspective view. The detected facets are displayed (white).
Figure 15
Facets extracted from the KITTI bounding box (red) and 3-D output facets (yellow) from our implementation are paired for comparison.
Figure 16
Projection of facets (orange and blue) on the KITTI-extracted face (with red).
Figure 17
Runtime comparison graph for facet detection methods on 252 scenes.
Figure 18
Facet representation on a complex scene.
