Deep machine learning provides state-of-the-art performance in image-based plant phenotyping

Michael P Pound et al.

Gigascience. 2017 Oct 1;6(10):1-10. doi: 10.1093/gigascience/gix083.
Abstract

In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of these datasets, now often captured robotically, frequently precludes manual inspection, hence the motivation for a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets.

Keywords: Phenotyping; QTL; deep learning; image analysis; root; shoot.


Figures

Figure 1:
A simplified example of a CNN architecture operating on a fixed-size image of part of an ear of wheat. The network performs alternating convolution and pooling operations (see the online methods for details). Each convolutional layer automatically extracts useful features, such as edges or corners, outputting a number of feature maps. Pooling operations shrink the feature maps to improve efficiency. The number of feature maps increases deeper into the network to improve classification accuracy. Finally, standard neural network layers form the classification layers, which output probabilities for each class.
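To make the alternating convolution/pooling pattern in this caption concrete, the sketch below shows a small patch classifier written in PyTorch. It is illustrative only: the framework, filter counts, patch size, and two-class output are assumptions for the example and do not reproduce the networks used in this study.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Illustrative CNN following the alternating convolution/pooling pattern
    of Figure 1; layer sizes are assumptions, not the paper's networks."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolution extracts features such as edges and corners
            nn.ReLU(),
            nn.MaxPool2d(2),                               # pooling shrinks the feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # more feature maps deeper in the network
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(                   # standard (fully connected) classification layers
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Class probabilities for a single fixed-size 32 x 32 patch (size chosen for illustration).
probs = torch.softmax(SimpleCNN()(torch.randn(1, 3, 32, 32)), dim=1)
```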
Figure 2:
Example training and validation images from our root tip and shoot feature datasets. Positive samples were taken at locations annotated by a user. Negative samples were generated on the root system and at random for the root images, and on computed feature points on the shoot images.
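As a rough illustration of how such patch datasets can be assembled (not the authors' exact procedure), the sketch below crops positive patches at hypothetical annotated coordinates and negative patches at random image locations; the coordinates, patch size, and the helper name extract_patch are made up for the example.

```python
import numpy as np

def extract_patch(image, y, x, size=32):
    """Crop a fixed-size patch centred on (y, x); the patch size is illustrative."""
    half = size // 2
    return image[y - half:y + half, x - half:x + half]

# Hypothetical data: positives at user-annotated feature coordinates,
# negatives at random locations elsewhere in the same image.
rng = np.random.default_rng(0)
image = rng.random((512, 512, 3))        # stand-in for a root or shoot image
annotated = [(100, 200), (300, 150)]     # made-up user annotations

positives = [extract_patch(image, y, x) for y, x in annotated]
negatives = [extract_patch(image, int(rng.integers(16, 496)), int(rng.integers(16, 496)))
             for _ in range(len(positives))]
```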
Figure 3:
Localization examples. Images showing the response of our classifier using a sliding window over each input image. (a) Three examples of wheat root tip localization. Regions of high response from the classifier are shown in yellow. (b) Two examples of wheat shoot feature localization. Regions of high response from the classifier for leaf tips are highlighted in orange, leaf bases in yellow, ear tips in blue, and ear bases in pink. A portion of the second image has been zoomed and shown with and without features highlighted. More images can be seen in Additional file 1.
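The following is a minimal sketch of how a sliding-window response map of this kind can be computed with a patch classifier (for instance, the SimpleCNN sketch above); the window size, stride, and function name sliding_window_response are illustrative assumptions rather than the authors' implementation.

```python
import torch

def sliding_window_response(model, image, window=32, stride=8):
    """Score every window position with a patch classifier and return a
    response map; high-response regions mark likely feature locations.
    Window size and stride are illustrative, not values from the paper."""
    _, h, w = image.shape
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    response = torch.zeros(rows, cols)
    model.eval()
    with torch.no_grad():
        for i in range(rows):
            for j in range(cols):
                patch = image[:, i * stride:i * stride + window,
                                 j * stride:j * stride + window]
                probs = torch.softmax(model(patch.unsqueeze(0)), dim=1)
                response[i, j] = probs[0, 1]   # probability of the feature class
    return response
```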
Figure 4:
The architecture of both convolutional neural networks (left: root; right: shoot). In each case, convolution and pooling layers reduce the spatial resolution to 1 × 1 while increasing the feature resolution. All convolutional layers used kernels of size 3 × 3 pixels, and the number of filters is shown to the right of each layer. Following the convolution and pooling layers, the fully connected (neural network) layers perform classification of the images. We included rectified linear unit (ReLU) layers between all convolutional and fully connected layers, and dropout layers between each pair of fully connected layers.
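A hedged sketch of this style of architecture, built from standard PyTorch blocks, is given below; the filter counts, the assumed 32 × 32 input, and the dropout rate are illustrative and do not reproduce the exact root and shoot networks shown in Figure 4.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """A 3 x 3 convolution followed by ReLU and 2 x 2 max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

# Stacked conv/pool blocks halve the spatial resolution while the number of
# filters grows, until a 1 x 1 spatial map remains; fully connected layers
# with ReLU and dropout then perform classification. Filter counts, the
# assumed 32 x 32 input, and the dropout rate are assumptions.
net = nn.Sequential(
    conv_block(3, 32),      # 32 x 32 -> 16 x 16
    conv_block(32, 64),     # 16 x 16 -> 8 x 8
    conv_block(64, 128),    # 8 x 8   -> 4 x 4
    conv_block(128, 256),   # 4 x 4   -> 2 x 2
    conv_block(256, 256),   # 2 x 2   -> 1 x 1 (spatial resolution reduced to 1 x 1)
    nn.Flatten(),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Dropout(0.5),        # dropout between fully connected layers
    nn.Linear(128, 2),      # two classes: feature vs background
)
```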

