Advanced deep learning models for phenotypic trait extraction and cultivar classification in lychee using photon-counting micro-CT imaging
- PMID: 38486848
- PMCID: PMC10937343
- DOI: 10.3389/fpls.2024.1358360
Abstract
Introduction: In contemporary agronomic research, the focus has increasingly shifted towards non-destructive imaging and precise phenotypic characterization. A photon-counting micro-CT system was developed that can image lychee fruit at micrometer resolution and capture a full energy spectrum, owing to its advanced photon-counting detectors.
Methods: For automatic measurement of phenotypic traits, seven CNN-based deep learning segmentation models (AttentionUNet, DeeplabV3+, SegNet, TransUNet, UNet, UNet++, and UNet3+) were developed. Machine learning techniques tailored for small-sample training were employed to identify key characteristics of various lychee species.
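To illustrate the encoder-decoder segmentation family named above, the sketch below is a minimal, hypothetical PyTorch example of a U-Net-style network with a single skip connection, trained with a Dice loss on dummy single-channel CT slices. The network depth, channel widths, loss, and training settings are assumptions for illustration only and do not reflect the authors' implementation.

```python
# Minimal, hypothetical U-Net-style sketch (PyTorch); not the authors' implementation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 skip channels + 16 upsampled channels
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                    # full-resolution features
        e2 = self.enc2(self.pool(e1))                        # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

def dice_loss(logits, target, eps=1e-6):
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(2, 1, 64, 64)                    # dummy single-channel CT slices
y = (torch.rand(2, 1, 64, 64) > 0.5).float()    # dummy binary masks
for _ in range(2):                              # toy training steps
    opt.zero_grad()
    loss = dice_loss(model(x), y)
    loss.backward()
    opt.step()
```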
Results: These models demonstrated outstanding performance, with Dice, Recall, and Precision scores predominantly between 0.90 and 0.99 and Mean Intersection over Union (MIoU) consistently between 0.88 and 0.98. The approach served both as a feature selection process and as a means of classification, significantly enhancing the study's ability to discern and categorize distinct lychee varieties.
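The reported metrics have standard definitions for binary segmentation masks. The sketch below, using assumed toy masks rather than the study's data, shows how Dice, Precision, Recall, and IoU follow from true-positive, false-positive, and false-negative counts; MIoU is the mean of per-class IoU values.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute Dice, Precision, Recall, and IoU for a pair of binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # predicted foreground that is correct
    fp = np.logical_and(pred, ~truth).sum()   # predicted foreground that is wrong
    fn = np.logical_and(~pred, truth).sum()   # missed foreground
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    return {"Dice": dice, "Precision": precision, "Recall": recall, "IoU": iou}

# Toy 4x4 prediction vs. ground truth (illustrative values only)
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
print(segmentation_metrics(pred, truth))
```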
Discussion: This research not only contributes to the advancement of non-destructive plant analysis but also opens new avenues for exploring the intricate phenotypic variations within plant species.
Keywords: deep learning; lychee phenotypic traits; micro-CT; non-destructive; plant phenomics.
Copyright © 2024 Xue, Huang, Xu and Xie.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.