Sensors (Basel). 2023 Jul 12;23(14):6327. doi: 10.3390/s23146327.

MInet: A Novel Network Model for Point Cloud Processing by Integrating Multi-Modal Information

Yuhao Wang et al.

Abstract

Three-dimensional LiDAR systems capture point cloud data that provide spatial geometry and multi-wavelength intensity information simultaneously, paving the way for three-dimensional point cloud recognition and processing. However, the irregular distribution and low resolution of point clouds, together with limited spatial recognition accuracy in complex environments, introduce inherent errors when classifying and segmenting the acquired targets. In contrast, two-dimensional visible-light images provide true-color information at high resolution, making object contours and fine details easy to distinguish. Integrating two-dimensional information with point clouds therefore offers complementary advantages. In this paper, we incorporate two-dimensional information into the point cloud to form a multi-modal representation, from which we extract local features that establish three-dimensional geometric relationships and two-dimensional color relationships. We introduce a novel network model, termed MInet (Multi-Information net), which effectively captures features from both two-dimensional color and three-dimensional pose information. The enhanced feature saliency of this model facilitates superior segmentation and recognition. We evaluate the MInet architecture on the ShapeNet and ThreeDMatch datasets for point cloud segmentation and on the Stanford dataset for object recognition. Quantitative and qualitative experiments demonstrate the superior performance of the proposed method in point cloud segmentation and object recognition tasks.

Keywords: LiDAR; multi-modal information; object recognition; point cloud; segmentation.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Three-dimensional feature extraction diagram. The SG module performs farthest-point sampling (FPS) and grouping. For three regions of different scale around the same central point, three sets of MLPs are applied, with convolution kernels (32, 64, 128), (64, 128, 256), and (64, 128, 256); the three-dimensional features are then obtained by a concat operation.
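To make the multi-scale pipeline of Figure 1 concrete, here is a minimal PyTorch-style sketch. Only the FPS sampling/grouping step and the three MLP channel sequences come from the caption; the k-nearest-neighbor grouping, the max-pooling over each neighborhood, the neighborhood sizes, and the names SGModule and ThreeDBranch are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

def farthest_point_sample(xyz, n_samples):
    # Iteratively pick the point farthest from those already chosen.
    # xyz: (B, N, 3) -> centroid indices of shape (B, n_samples).
    B, N, _ = xyz.shape
    batch = torch.arange(B, device=xyz.device)
    idx = torch.zeros(B, n_samples, dtype=torch.long, device=xyz.device)
    dist = torch.full((B, N), float("inf"), device=xyz.device)
    farthest = torch.zeros(B, dtype=torch.long, device=xyz.device)  # start at point 0
    for i in range(n_samples):
        idx[:, i] = farthest
        centroid = xyz[batch, farthest].unsqueeze(1)               # (B, 1, 3)
        dist = torch.minimum(dist, ((xyz - centroid) ** 2).sum(-1))
        farthest = dist.argmax(-1)
    return idx

class SGModule(nn.Module):
    # One sampling-and-grouping scale: k-NN grouping around each sampled
    # centroid, a shared MLP on local coordinates, then max-pooling.
    def __init__(self, channels, k):
        super().__init__()
        self.k = k
        layers, in_ch = [], 3
        for out_ch in channels:
            layers += [nn.Conv2d(in_ch, out_ch, 1), nn.BatchNorm2d(out_ch), nn.ReLU()]
            in_ch = out_ch
        self.mlp = nn.Sequential(*layers)

    def forward(self, xyz, centroid_idx):
        B = xyz.shape[0]
        batch = torch.arange(B, device=xyz.device)
        centroids = xyz[batch[:, None], centroid_idx]              # (B, S, 3)
        knn = torch.cdist(centroids, xyz).topk(self.k, largest=False).indices
        grouped = xyz[batch[:, None, None], knn]                   # (B, S, k, 3)
        grouped = grouped - centroids.unsqueeze(2)                 # local coordinates
        feat = self.mlp(grouped.permute(0, 3, 1, 2))               # (B, C, S, k)
        return feat.max(-1).values                                 # (B, C, S)

class ThreeDBranch(nn.Module):
    # Three scales around the same FPS centroids, concatenated channel-wise.
    def __init__(self, n_centroids=512):
        super().__init__()
        self.n_centroids = n_centroids
        self.scales = nn.ModuleList([
            SGModule((32, 64, 128), k=16),
            SGModule((64, 128, 256), k=32),
            SGModule((64, 128, 256), k=64),
        ])

    def forward(self, xyz):                                        # xyz: (B, N, 3)
        idx = farthest_point_sample(xyz, self.n_centroids)
        return torch.cat([s(xyz, idx) for s in self.scales], dim=1)

For a (B, N, 3) input this returns a (B, 640, S) tensor: 128 + 256 + 256 channels from the three scales at the S sampled centroids.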
Figure 2. Local feature extraction. The local features are obtained by extracting features at three different scales.
Figure 3. Two-dimensional feature extraction diagram. The input is processed by three different convolutions with kernels 16, 32, and 64, respectively; after a concat operation on the convolution outputs, three MLP layers with kernels 64, 32, and num_class extract the two-dimensional information features and produce the final two-dimensional features.
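A minimal sketch of this two-dimensional branch follows, assuming that "convolution kernel" in the caption refers to the number of output channels and that the per-point color values are processed as a length-N sequence; both readings, and the num_class default, are assumptions rather than details fixed by the paper.

import torch
import torch.nn as nn

class TwoDBranch(nn.Module):
    def __init__(self, in_ch=3, num_class=50):    # num_class=50 is an assumed default
        super().__init__()
        # Three parallel convolutions producing 16, 32, and 64 channels.
        self.convs = nn.ModuleList([
            nn.Sequential(nn.Conv1d(in_ch, c, 1), nn.BatchNorm1d(c), nn.ReLU())
            for c in (16, 32, 64)
        ])
        # Three MLP layers with channels 64, 32, and num_class, applied
        # after the concat of the three convolution outputs.
        self.mlp = nn.Sequential(
            nn.Conv1d(16 + 32 + 64, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 32, 1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, num_class, 1),
        )

    def forward(self, color):                      # color: (B, 3, N) per-point RGB
        fused = torch.cat([conv(color) for conv in self.convs], dim=1)
        return self.mlp(fused)                     # (B, num_class, N)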
Figure 4. Object feature extraction. The multivariate features of the target are obtained through concat and MLP layers.
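A sketch of the fusion step, assuming the three-dimensional and two-dimensional branches are evaluated on the same set of sampled points so their features can be concatenated point-wise; the caption fixes only the concat-then-MLP structure, so the channel widths below are illustrative.

import torch
import torch.nn as nn

class FusionHead(nn.Module):
    # Concatenate the two modalities channel-wise, then mix them with an MLP.
    def __init__(self, ch3d=640, ch2d=50, out_ch=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(ch3d + ch2d, 512, 1), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Conv1d(512, out_ch, 1), nn.BatchNorm1d(out_ch), nn.ReLU(),
        )

    def forward(self, feat3d, feat2d):             # (B, ch3d, S) and (B, ch2d, S)
        return self.mlp(torch.cat([feat3d, feat2d], dim=1))   # (B, out_ch, S)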
Figure 5. Visualization of segmentation results based on ShapeNet.
Figure 6. Visualization of segmentation results based on 3DMatch.
Figure 7. Visualization of different methods on S3DIS, from left to right: ground truth, MInet, PointNet++.
