Efficient detection of eyes on potato tubers using deep-learning for robotic high-throughput sampling
- PMID: 39748820
- PMCID: PMC11693691
- DOI: 10.3389/fpls.2024.1512632
Abstract
Molecular-based detection of pathogens from potato tubers holds promise, but the initial sample extraction process is labor-intensive. Developing a robotic tuber sampling system, equipped with a fast and precise machine vision technique to identify optimal sampling locations on a potato tuber, offers a viable solution. However, detecting sampling locations such as eyes and stolon scars is challenging due to variability in their appearance, size, and shape, along with soil adhering to the tubers. In this study, we addressed these challenges by evaluating various deep-learning-based object detectors, encompassing the You Only Look Once (YOLO) variants YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9, YOLOv10, and YOLO11, for detecting eyes and stolon scars across a range of diverse potato cultivars. A robust image dataset obtained from tubers of five potato cultivars (three russet skinned, a red skinned, and a purple skinned) was developed as a benchmark for detection of these sampling locations. The mean average precision at an intersection over union threshold of 0.5 (mAP@0.5) ranged from 0.832 and 0.854 with YOLOv5n to 0.903 and 0.914 with YOLOv10l. Among all the tested models, YOLOv10m showed the optimal trade-off between detection accuracy (mAP@0.5 of 0.911) and inference time (92 ms), along with satisfactory generalization performance when cross-validated among the cultivars used in this study. The model benchmarking and inferences of this study provide insights for advancing the development of a robotic potato tuber sampling device.
Keywords: FTA card; YOLO; machine vision; molecular diagnostics; potato pathogens; tissue sampling robot.
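The mAP@0.5 metric reported above scores a predicted bounding box as a true positive only when its intersection over union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of that matching criterion (illustrative only; the box coordinates below are hypothetical, not taken from the study's dataset):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detected eye counts toward mAP@0.5 only if IoU >= 0.5
# with the annotated box.
pred = (10, 10, 50, 50)
truth = (15, 15, 55, 55)
print(iou(pred, truth) >= 0.5)  # → True (IoU ≈ 0.62)
```

Average precision is then computed per class from the precision-recall curve over all such matched detections, and mAP@0.5 averages it across the eye and stolon-scar classes.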
Copyright © 2024 Divyanth, Khanal, Paudel, Mattupalli and Karkee.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.