Plants (Basel). 2023 May 19;12(10):2035. doi: 10.3390/plants12102035.

Machine Learning Methods for Automatic Segmentation of Images of Field- and Glasshouse-Based Plants for High-Throughput Phenotyping


Frank Gyan Okyere et al. Plants (Basel). 2023.

Abstract

Image segmentation is a fundamental but critical step for achieving automated high-throughput phenotyping. While conventional segmentation methods perform well in homogeneous environments, their performance decreases in more complex environments. This study aimed to develop a fast and robust neural-network-based segmentation tool to phenotype plants in both field and glasshouse environments in a high-throughput manner. Digital images of cowpea (from the glasshouse) and wheat (from the field) with different nutrient supplies across their full growth cycle were acquired. Image patches from 20 randomly selected images from the acquired dataset were transformed from their original RGB format to multiple color spaces. The pixels in the patches were annotated as foreground and background, with each pixel having a feature vector of 24 color properties. A feature selection technique was applied to choose the sensitive features, which were used to train a multilayer perceptron network (MLP) and two other traditional machine learning models: support vector machines (SVMs) and random forest (RF). The performance of these models, together with two standard color-index segmentation techniques (excess green (ExG) and excess green-red (ExGR)), was compared. The proposed method outperformed the other methods in producing quality segmented images, with over 98% pixel classification accuracy. Regression models developed from the different segmentation methods to predict Soil Plant Analysis Development (SPAD) values of cowpea and wheat showed that images from the proposed MLP method produced models with comparably high predictive power and accuracy. This method will be an essential tool for the development of a data analysis pipeline for high-throughput plant phenotyping. The proposed technique is capable of learning from different environmental conditions, with a high level of robustness.
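
For illustration only, the pixel-classification pipeline described in the abstract can be sketched as follows, assuming OpenCV for the color-space conversions and scikit-learn for the MLP; the function names, network size, and training settings are assumptions for this sketch, not the authors' implementation.

import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def pixel_color_features(bgr_patch):
    # Stack eight three-channel color spaces (8 x 3 = 24 features per pixel),
    # mirroring the 24 color properties described in the abstract.
    codes = [cv2.COLOR_BGR2RGB, cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2YCrCb,
             cv2.COLOR_BGR2Lab, cv2.COLOR_BGR2YUV, cv2.COLOR_BGR2Luv,
             cv2.COLOR_BGR2HLS, cv2.COLOR_BGR2XYZ]
    planes = [cv2.cvtColor(bgr_patch, c).astype(np.float32) for c in codes]
    feats = np.concatenate(planes, axis=2)       # H x W x 24
    return feats.reshape(-1, feats.shape[2])     # one 24-element row per pixel

def train_pixel_classifier(X, y):
    # X: per-pixel features from the annotated patches; y: 1 = plant, 0 = background.
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X, y)
    return clf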

Keywords: feature extraction; imaging; machine learning; phenotyping; segmentation.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Classification accuracy scores for the three models (multilayer perceptron (MLP), random forest (RF), and support vector machines (SVMs)) trained on the selected features (SF) dataset and the all features (AF) dataset.
Figure 2
Examples of glasshouse and field segmented plants using the proposed method and selected segmentation methods. (a) Original wheat image, (b) ExG wheat segmented image, (c) proposed method (MLP) segmented image, (d) original cowpea image, (e) ExG cowpea segmented image, and (f) MLP segmented image.
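As a rough illustration of the color-index baselines in this comparison, the ExG and ExGR indices are commonly computed from normalized chromaticity coordinates as ExG = 2g - r - b and ExGR = ExG - (1.4r - g); the fixed thresholds in the sketch below are placeholders, not values taken from the paper.

import numpy as np

def excess_green(rgb):
    rgb = rgb.astype(np.float32)
    total = rgb.sum(axis=2) + 1e-6               # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b                       # ExG index per pixel

def segment_exg(rgb, threshold=0.05):
    # Binary plant mask from ExG with an illustrative fixed threshold.
    return excess_green(rgb) > threshold

def segment_exgr(rgb):
    # ExGR = ExG - ExR, with ExR = 1.4r - g; positive values treated as plant.
    rgb = rgb.astype(np.float32)
    total = rgb.sum(axis=2) + 1e-6
    r, g = rgb[..., 0] / total, rgb[..., 1] / total
    return excess_green(rgb) - (1.4 * r - g) > 0.0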
Figure 3
Mean segmentation accuracy rate comparison for quality assessment of five different segmentation methods for glasshouse-based images; Qseg measures the segmentation consistency on a pixel-by-pixel basis, Sr measures the consistency of plant pixels between image regions, and Es measures the rate of pixel misclassification. These were applied on the five segmentation methods: multilayer perceptron (MLP), support vector machines (SVMs), random forest (RF), excess green (ExG), and excess green–red (ExGR).
Figure 4
Comparison of segmentation accuracy rates (Qseg, Sr, Es) for quality assessment of five different segmentation methods (multilayer perceptron (MLP), support vector machines (SVMs), random forest (RF), excess green (ExG), and excess green–red (ExGR)) for field-based images. Qseg measures the segmentation consistency on a pixel-by-pixel basis, Sr measures the consistency of plant pixels between image regions, and Es measures the rate of pixel misclassification.
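A minimal sketch of how such pixel-wise quality scores can be computed against a manually segmented reference mask, assuming Qseg as the intersection-over-union of predicted and reference plant pixels, Sr as the fraction of reference plant pixels recovered, and Es as the fraction of misclassified pixels; the paper's exact formulations may differ.

import numpy as np

def segmentation_scores(pred_mask, ref_mask):
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    qseg = inter / union if union else 1.0        # pixel-by-pixel consistency
    sr = inter / ref.sum() if ref.sum() else 1.0  # reference plant pixels recovered
    es = np.logical_xor(pred, ref).mean()         # rate of pixel misclassification
    return qseg, sr, es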
Figure 5
Scatterplots of predicted and observed SPAD values for the glasshouse plants. MLP-GR, RF-GR, SVM-GR, ExGR-GR, and ExG-GR represent the glasshouse-based regression models for the multilayer perceptron, random forest, support vector machine, excess green, and excess green–red segmentation methods, respectively.
Figure 6
Scatterplots of predicted and observed SPAD values for the field plants. MLP-FR, RF-FR, SVM-FR, ExGR-FR, and ExG-FR are the field-based regression models for the multilayer perceptron, random forest, support vector machine, excess green, and excess green–red segmentation methods, respectively.
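For context, the SPAD-prediction step behind Figures 5 and 6 amounts to regressing measured SPAD readings on features computed over the segmented plant pixels. The sketch below assumes a simple linear model fitted with scikit-learn and mean segmented-pixel color as the predictor; these choices are illustrative and not necessarily the model form used in the paper.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def mean_plant_color(rgb, mask):
    # Average RGB of the pixels inside the plant mask: one feature row per image.
    return rgb[mask].mean(axis=0)

def fit_spad_model(features, spad_values):
    # features: n_images x n_features; spad_values: measured SPAD readings.
    model = LinearRegression().fit(features, spad_values)
    r2 = r2_score(spad_values, model.predict(features))
    return model, r2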
Figure 7
Diagrammatic representation of the proposed method for glasshouse and field images, using the multilayer perceptron (MLP).
Figure 8
Annotation of images into foreground and background patches for feature extraction. (a) Wheat image annotation and (b) cowpea image annotation. FG represents the foreground annotation, and BG the background annotation.
Figure 9
Feature-selection process involving correlation analysis and feature ranking based on importance score. (a) is the heatmap of all extracted features and (b) is a plot of feature importance for feature selection. Abbreviations: RGB (R = red, G = green and B = blue channels); HSV (H = hue, S = saturation, and V = value); ybr (y = luma, b = blue component, and r = red component); Lab (L = lightness, and a and b = chromaticity); YUV (Y = luma or brightness, U = blue projection, and V = red projection); Luv (L = luminance, u = blue axis, and v = red axis); hls (h = hue, l = lightness, and s = saturation); and XYZ (X and Z = spectral weighting curves, and Y = luminance).
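The feature-selection step in Figure 9 can be sketched as a correlation screen followed by an importance ranking; the use of a random-forest importance score and the top-k cut-off below are illustrative assumptions, not the paper's exact procedure.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def select_features(X, y, feature_names, top_k=10):
    # Correlation matrix of the 24 color features (input to a heatmap, cf. Figure 9a).
    corr = pd.DataFrame(X, columns=feature_names).corr()
    # Rank features by importance score (cf. Figure 9b) and keep the top ones.
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    ranking = pd.Series(rf.feature_importances_, index=feature_names)
    selected = ranking.sort_values(ascending=False).head(top_k).index.tolist()
    return corr, ranking, selected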
Figure 10
Examples of images obtained from the glasshouse and field, segmented as reference images. (a) Original image, (b) manually segmented image, and (c) binary image. Reference images were randomly selected from the dataset with different nutrient content and illumination and at variable growth stages.
