Front Plant Sci. 2025 Mar 27;16:1491170.
doi: 10.3389/fpls.2025.1491170. eCollection 2025.

Plant stem and leaf segmentation and phenotypic parameter extraction using neural radiance fields and lightweight point cloud segmentation networks


Gaofei Qiao et al. Front Plant Sci. 2025.

Abstract

High-quality 3D reconstruction and accurate 3D organ segmentation of plants are crucial prerequisites for automatically extracting phenotypic traits. In this study, we first reconstruct maize plants in 3D with the Nerfacto neural radiance field model and extract a dense point cloud from the resulting implicit representation. Second, we propose a lightweight point cloud segmentation network (PointSegNet) specifically for stem and leaf segmentation. This network includes a Global-Local Set Abstraction (GLSA) module to integrate local and global features and an Edge-Aware Feature Propagation (EAFP) module to enhance edge awareness. Experimental results show that PointSegNet outperforms five other state-of-the-art deep learning networks, reaching 93.73% mean Intersection over Union (mIoU), 97.25% precision, 96.21% recall, and 96.73% F1-score. Even on tomato and soybean plants with complex structures, PointSegNet achieves the best metrics. Based on principal component analysis (PCA), we further optimize the method for obtaining parameters such as leaf length and leaf width by using the PCA principal vectors. Finally, the maize stem thickness, stem height, leaf length, and leaf width obtained from our measurements are compared with manual measurements, yielding R² values of 0.99, 0.84, 0.94, and 0.87, respectively. These results indicate that our method extracts phenotypic parameters with high accuracy and reliability. Covering the entire process from 3D reconstruction of maize plants to point cloud segmentation and phenotypic parameter extraction, this study provides a reliable and objective method for acquiring plant phenotypic parameters and will support plant phenotyping in smart agriculture.
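The PCA-based extraction of leaf length and width mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simplified flat-leaf approximation in which length and width are the extents of a segmented leaf's point cloud along the first two PCA principal vectors (the paper's actual pipeline may include further refinement steps):

```python
import numpy as np

def leaf_dimensions(points):
    """Estimate leaf length and width as extents along PCA principal vectors.

    points: (N, 3) array of coordinates for one segmented leaf.
    Hypothetical helper for illustration; not the paper's exact method.
    """
    centered = points - points.mean(axis=0)
    # Right singular vectors of the centered cloud are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt.T  # point coordinates in the PCA basis
    length = proj[:, 0].max() - proj[:, 0].min()  # extent along 1st principal vector
    width = proj[:, 1].max() - proj[:, 1].min()   # extent along 2nd principal vector
    return length, width
```

For a synthetic planar leaf 10 units long and 2 units wide, this returns (10, 2) regardless of how the leaf is oriented in space, since PCA recovers the leaf's own axes.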

Keywords: lightweight network; neural radiance fields; plant phenotype; point cloud segmentation; three-dimensional point cloud.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
Overview of the proposed framework. (A) Photographs are taken around a maize plant; the image frames are processed with COLMAP and converted to the Local Light Field Fusion (LLFF) format. (B) The converted data are used to train the neural radiance field, which generates dense point clouds that are then preprocessed. (C) The main stems and leaves of the maize are segmented using PointSegNet, and the model's segmentation effectiveness is verified on various complex plants. (D) Based on the segmentation results for the maize plants, four phenotypic traits (stem height, stem thickness, leaf width, and leaf length) are extracted to further validate the point cloud segmentation and shape-extraction methods.
Figure 2
NeRF-based 3D representation pipeline.
Figure 3
Structural components of the Nerfacto model.
Figure 4
The overall architecture of PointSegNet, a U-Net style architecture with a Global-Local Set Abstraction (GLSA) module for downsampling and an Edge-Aware Feature Propagation (EAFP) module for upsampling. The x_{N×3} and f_{N×d} inputs to the EAFP module are skip connections from the encoder: x_{N×3} denotes the spatial location information of the point cloud and f_{N×d} denotes its feature information.
Figure 5
Illustration of the Residual Multi-Layer Perceptron (ResMLP) module.
Figure 6
Illustration of the Relative Spatial Attention (RSA) module, where x_{N×3} represents the spatial location information of the point cloud and f_{N×d} represents its feature information.
Figure 7
Visualization of 3D reconstruction results from Nerfacto, the open-source COLMAP software, and the commercial 3DF Zephyr software for different numbers of input images, together with the time taken for reconstruction.
Figure 8
Qualitative visual analysis of organ segmentation using PointSegNet on maize point cloud datasets. (a–e) show the segmentation results and ground truth of maize point clouds for different growth cycles.
Figure 9
Qualitative visual analysis of complex plant organ segmentation using PointSegNet on tomato and soybean point cloud datasets. (a–c) show the segmentation results and ground truth for tomato, while (d) and (e) present those for soybean.
Figure 10
Visualization flowchart for leaf length and leaf width parameter extraction.
Figure 11
Comparison of phenotypic parameters extracted from segmented maize point clouds with measured values: (a) stem height, (b) stem diameter, (c) leaf length, and (d) leaf width.
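The agreement between extracted and manually measured parameters shown in Figure 11 is summarized with the coefficient of determination. As an illustration (with hypothetical numbers, not the paper's data), R² can be computed as:

```python
import numpy as np

def r_squared(measured, predicted):
    """Coefficient of determination between manual and extracted values."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((measured - predicted) ** 2)        # residual sum of squares
    ss_tot = np.sum((measured - measured.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot
```

An R² close to 1 (such as the 0.99 reported for stem thickness) indicates that the extracted values explain nearly all of the variance in the manual measurements.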


