Advanced deep learning models for phenotypic trait extraction and cultivar classification in lychee using photon-counting micro-CT imaging

Mengjia Xue et al. Front Plant Sci. 2024 Feb 29;15:1358360. doi: 10.3389/fpls.2024.1358360. eCollection 2024.

Abstract

Introduction: In contemporary agronomic research, the focus has increasingly shifted towards non-destructive imaging and precise phenotypic characterization. A photon-counting micro-CT system was developed that images lychee fruit at the micrometer level and captures a full energy spectrum, enabled by its advanced photon-counting detectors.

Methods: For automatic measurement of phenotypic traits, seven CNN-based deep learning models (AttentionUNet, DeeplabV3+, SegNet, TransUNet, UNet, UNet++, and UNet3+) were developed. Machine learning techniques tailored for small-sample training were employed to identify key characteristics of various lychee cultivars.

Results: These models demonstrate outstanding performance with Dice, Recall, and Precision indices predominantly ranging between 0.90 and 0.99. The Mean Intersection over Union (MIoU) consistently falls between 0.88 and 0.98. This approach served both as a feature selection process and a means of classification, significantly enhancing the study's ability to discern and categorize distinct lychee varieties.
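As a minimal sketch (not the authors' implementation) of how the reported Dice, Recall, Precision, and IoU indices relate for a single binary segmentation mask:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Overlap metrics for a binary segmentation mask vs. ground truth."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    recall = tp / (tp + fn)          # fraction of true pixels recovered
    precision = tp / (tp + fp)       # fraction of predicted pixels correct
    iou = tp / (tp + fp + fn)        # intersection over union
    return dice, recall, precision, iou

# Toy 4x4 masks: 4 ground-truth pixels, 3 of them predicted.
gt = np.zeros((4, 4), dtype=bool)
gt[0, :4] = True
pred = np.zeros((4, 4), dtype=bool)
pred[0, :3] = True
dice, recall, precision, iou = seg_metrics(pred, gt)
# dice = 6/7, recall = 0.75, precision = 1.0, iou = 0.75
```

MIoU is then the mean of the per-class IoU values, here taken over the kernel, pulp, endocarp, epicarp, and background classes.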

Discussion: This research not only contributes to the advancement of non-destructive plant analysis but also opens new avenues for exploring the intricate phenotypic variations within plant species.

Keywords: deep learning; lychee phenotypic traits; micro-CT; non-destructive; plant phenomics.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
Four morphologically representative lychee samples. (A) RGB images of lychees. (B) 3D NURBS model of lychee constructed from Micro-CT. (C) Cross-sectional view of lychee. (D) Tomogram images of lychee cross-section from Micro-CT.
Figure 2
Photon-counting micro-CT scanner design. By rotationally scanning the lychee samples, the MFX and the photon-counting detectors transmit data to the console PC, which then generates CT images through pre-processing and reconstruction steps.
Figure 3
Multi-energy spectral analysis based on photon-counting detectors. (A) The left column features three sub-figures, each reconstructed using a different energy span. (B) A violin plot, with a box plot on its right side, reveals the distribution of average Hounsfield unit (HU) values in lychee pulp tissues.
Figure 4
Segmentation results. The images presented in the figure are from the segmentation results of the same sample in the test set trained by different models. The first column acts as a reference, presenting the manual segmentation of the lychee fruit’s components, namely the kernel, pulp, endocarp, and epicarp. The following columns exhibit the segmentation outcomes from different models, including AttentionUNet, DeepLabV3+, among others. These segments are highlighted with color-coded outlines, facilitating a swift comparison of each model’s precision in relation to the manual segmentation.
Figure 5
Performance comparison of deep learning segmentation models. The initial five subfigures display the performance of seven CNN-based models in terms of MIoU, Dice, Recall, and Precision indices, specifically when segmenting different parts of the lychees, including the Kernel, Pulp, Endocarp, Epicarp, and Background. The final sub-figure provides an aggregated view of the average performance for each model.
Figure 6
Comparison between manual and auto measurements for (A) fruit features and (B) kernel features. The x-axis represents the manual measurements of these morphological traits, which are considered the gold standard in our analysis. The y-axis displays the corresponding automatic measurements derived from our image segmentation algorithms. Each point on the scatter plots represents an individual measurement, with different colors symbolizing different traits: sky blue for length, dark blue for width, and green for height.
Figure 7
Feature analysis. The left y-axis shows a histogram ranking the traits by the importance assigned by a Random Forest classifier. The right y-axis corresponds to the two lines within the shaded area: the orange line charts each trait's Kendall correlation coefficient with the species variable, and the blue line shows the average value of each trait after MinMax scaling, surrounded by a blue standard-deviation band.
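The two statistics in this plot can be sketched as follows; the `trait` and `species` arrays are hypothetical stand-ins, this is not the authors' code, and the Random Forest ranking itself is omitted:

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (tau-a form, no tie correction)."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def minmax_scale(values):
    """Rescale a 1-D array linearly onto [0, 1]."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

# Hypothetical data: one trait measured on four fruits of two species.
trait = [1.0, 2.0, 3.0, 4.0]
species = [0, 0, 1, 1]
tau = kendall_tau(trait, species)   # 2/3: ties in species reduce tau-a
scaled = minmax_scale(trait)        # [0.0, 1/3, 2/3, 1.0]
```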
Figure 8
Pearson correlation matrix triangular heatmap. Each cell within the heatmap provides the correlation coefficient between two attributes, with the color intensity and direction (green to blue) indicating the strength and type of the relationship.
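A Pearson matrix of this kind can be computed with `np.corrcoef` and masked to a triangle for the heatmap; the trait values below are synthetic stand-ins, not the study's data:

```python
import numpy as np

# Hypothetical trait matrix: 50 fruits x 3 traits (length, width, height),
# with width deliberately correlated with length.
rng = np.random.default_rng(42)
length = rng.normal(30.0, 2.0, 50)
width = 0.8 * length + rng.normal(0.0, 1.0, 50)
height = rng.normal(25.0, 2.0, 50)
traits = np.column_stack([length, width, height])

corr = np.corrcoef(traits, rowvar=False)  # 3x3 Pearson correlation matrix
lower = np.tril(corr)                     # lower triangle, as drawn in a heatmap
```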
Figure 9
Importance of trait features in different lychee varieties. Each subplot represents a particular lychee variety, with the vertical axis indicating the level of importance for each trait, and the horizontal axis representing different feature types.
Figure 10
(A) Accuracy of classifiers with stratified CV method, which illustrates the accuracy of classifiers using the Stratified CV method across different splits. (B) Average accuracy of classifiers with and without CV methods, where the sky-blue bars indicate the accuracy of models trained without CV methods, serving as a baseline for comparison.
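A stratified split keeps the class proportions (here, cultivar counts) the same in every fold. A minimal sketch, with hypothetical labels and no claim to match the authors' code:

```python
import numpy as np

def stratified_kfold(labels, k, seed=0):
    """Yield (train_idx, test_idx) pairs with per-class proportions preserved."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    folds = [[] for _ in range(k)]
    # Distribute each class's shuffled indices round-robin across the folds.
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        for i, sample in enumerate(idx):
            folds[i % k].append(int(sample))
    for i in range(k):
        test = sorted(folds[i])
        train = sorted(s for f in folds[:i] + folds[i + 1:] for s in f)
        yield train, test

# Hypothetical labels: 6 fruits each of two cultivars, split into 3 folds.
labels = [0] * 6 + [1] * 6
splits = list(stratified_kfold(labels, k=3))
```

Each of the three test folds then contains exactly two fruits of each cultivar, so per-split accuracies are comparable.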
