Sensors (Basel). 2022 Aug 4;22(15):5821. doi: 10.3390/s22155821

In-Field Automatic Identification of Pomegranates Using a Farmer Robot

Rosa Pia Devanna et al.

Abstract

Ground vehicles equipped with vision-based perception systems can provide a rich source of information for precision agriculture tasks in orchards, including fruit detection and counting, phenotyping, and plant growth and health monitoring. This paper presents a semi-supervised deep learning framework for automatic pomegranate detection using a farmer robot equipped with a consumer-grade camera. In contrast to standard deep-learning methods that require time-consuming and labor-intensive image labeling, the proposed system relies on a novel multi-stage transfer learning approach, whereby a pre-trained network is fine-tuned for the target task using images of fruits in controlled conditions, and then progressively extended to more complex scenarios towards accurate and efficient segmentation of field images. Results of experimental tests, performed in a commercial pomegranate orchard in southern Italy, are presented using the DeepLabv3+ (Resnet18) architecture and compared with those obtained using conventional manual image annotation. The proposed framework allows for accurate segmentation results, achieving an F1-score of 86.42% and IoU of 97.94%, while relieving the burden of manual labeling.
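The abstract reports segmentation quality via F1-score and IoU. Both follow directly from pixel-level confusion counts, and the two metrics are algebraically linked (F1 = 2·IoU / (1 + IoU)). A minimal sketch in Python; the counts below are hypothetical, for illustration only, not the paper's data:

```python
def segmentation_metrics(tp, fp, fn):
    """Compute pixel-level precision, recall, F1-score, and IoU
    from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # intersection over union (Jaccard index)
    return precision, recall, f1, iou

# Hypothetical pixel counts, for illustration only
p, r, f1, iou = segmentation_metrics(tp=800, fp=150, fn=100)
print(f"F1 = {f1:.4f}, IoU = {iou:.4f}")
```

Note that since F1 = 2·IoU / (1 + IoU), the F1-score is always greater than or equal to the IoU for the same segmentation.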

Keywords: agricultural robotics; deep learning; fruit detection; multi-stage transfer learning; precision farming.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. The farmer robot, Polibot, used for experimentation, equipped with a multi-sensor suite.

Figure 2. Sample images from the different data sets: controlled-environment acquisitions under uniform (left) and non-uniform (center) exposure; field images acquired by the robotic platform (right).

Figure 3. Pipeline of the image segmentation approach.

Figure 4. Block diagram of the DeepLabv3+ architecture using a set of 18 consecutive convolutional layers.

Figure 5. Sample image from SET1 (left) and the corresponding labeled image obtained by color thresholding and morphological operations (right).

Figure 6. Data augmentation for the sample image of Figure 5. Twenty images are obtained by rotating, reflecting, and varying contrast and exposure.

Figure 7. Four segmented images of SET2, in which each pixel recognized as a fruit by the First CE net is blue, while those in the background are cyan.

Figure 8. Results of labeling for a sample image of SET2: original image (left); labeled image obtained using the First CE net before (center) and after morphological operations (right).

Figure 9. Final CE net result for a sample image acquired in the field. (Left): original image; (right): segmentation result.

Figure 10. True-negative images acquired in a vineyard, used for TNA training.

Figure 11. TNA result for the same image as Figure 9. (Left): original image; (right): segmentation result.

Figure 12. FIA result for the same image as Figure 9. (Left): original image; (right): segmentation result.

Figure 13. Comparison among segmentation results obtained from the three different approaches. White pixels represent true positives, black pixels true negatives, pink pixels false negatives, and green pixels false positives.

Figure 14. Confusion matrices for the manual labeling network (left) and the FIA network (right).
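The color coding of Figure 13 (white for true positives, black for true negatives, pink for false negatives, green for false positives) can be reproduced by comparing a predicted binary mask against ground truth pixel by pixel. A minimal sketch, assuming masks are same-shaped nested lists of 0 (background) / 1 (fruit); the function name and exact RGB values are illustrative assumptions, not taken from the paper:

```python
# RGB colors matching the comparison scheme of Figure 13
COLORS = {
    ("fruit", "fruit"): (255, 255, 255),  # true positive  -> white
    ("bg", "bg"): (0, 0, 0),              # true negative  -> black
    ("bg", "fruit"): (255, 192, 203),     # false negative -> pink
    ("fruit", "bg"): (0, 255, 0),         # false positive -> green
}

def comparison_overlay(pred, truth):
    """Map each (predicted, ground-truth) pixel pair to an RGB color.
    pred and truth are same-shaped nested lists of 0 (background) / 1 (fruit)."""
    label = lambda v: "fruit" if v else "bg"
    return [
        [COLORS[(label(p), label(t))] for p, t in zip(prow, trow)]
        for prow, trow in zip(pred, truth)
    ]

pred = [[1, 0], [1, 0]]
truth = [[1, 1], [0, 0]]
overlay = comparison_overlay(pred, truth)
# overlay[0][0] is white (TP), overlay[0][1] is pink (FN),
# overlay[1][0] is green (FP), overlay[1][1] is black (TN)
```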

References

    1. Reina G., Milella A., Rouveure R., Nielsen M., Worst R., Blas M.R. Ambient awareness for agricultural robotic vehicles. Biosyst. Eng. 2016;146:114–132. doi: 10.1016/j.biosystemseng.2015.12.010. - DOI
    1. Bargoti S., Underwood J.P. Image Segmentation for Fruit Detection and Yield Estimation in Apple Orchards. J. Field Robot. 2017;34:1039–1060. doi: 10.1002/rob.21699. - DOI
    1. Arivazhagan S., Shebiah R.N., Nidhyanandhan S.S., Ganesan L. Fruit Recognition Using Color and Texture Features. [(accessed on 17 June 2022)];J. Emerg. Trends Comput. Inf. Sci. 2010 1:90–94. Available online: http://www.cisjournal.org.
    1. Zhao J., Tow J., Katupitiya J. On-Tree Fruit Recognition Using Texture Properties and Color Data; Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems; Edmonton, AB, Canada. 2–6 August 2005; IROS 263–268.
    1. Jana S., Parekh R. Shape-Based Fruit Recognition and Classification. [(accessed on 17 June 2022)];Commun. Comput. Inf. Sci. 2017 776:184–196. Available online: https://link.springer.com/chapter/10.1007/978-981-10-6430-2_15. - DOI
