Proc Natl Acad Sci U S A. 2018 May 1;115(18):4613-4618. doi: 10.1073/pnas.1716999115. Epub 2018 Apr 16.

An explainable deep machine vision framework for plant stress phenotyping


Sambuddha Ghosal et al. Proc Natl Acad Sci U S A.

Abstract

Current approaches for accurate identification, classification, and quantification of biotic and abiotic stresses in crop research and production are predominantly visual and require specialized training. Such techniques are hindered by subjectivity resulting from inter- and intrarater cognitive variability, which translates to erroneous decisions and a significant waste of resources. Here, we demonstrate a machine learning framework's ability to identify and classify a diverse set of foliar stresses in soybean [Glycine max (L.) Merr.] with remarkable accuracy. We also present an explanation mechanism that uses the top-K high-resolution feature maps to isolate the visual symptoms used to make predictions. This unsupervised identification of visual symptoms provides a quantitative measure of stress severity, allowing for identification (type of foliar stress), classification (low, medium, or high stress), and quantification (stress severity) in a single framework without detailed symptom annotation by experts. We reliably identified and classified several biotic (bacterial and fungal diseases) and abiotic (chemical injury and nutrient deficiency) stresses by learning from over 25,000 images. The learned model is robust to input image perturbations, demonstrating viability for high-throughput deployment. We also observed that the learned model appears to be species agnostic, suggesting a capacity for transfer learning. The availability of an explainable model that can consistently, rapidly, and accurately identify and quantify foliar stresses would have significant implications in scientific research, plant breeding, and crop production. The trained model could be deployed in mobile platforms (e.g., unmanned air vehicles and automated ground scouts) for rapid, large-scale scouting or as a mobile application for real-time detection of stress by farmers and researchers.

Keywords: explainable deep learning; machine learning; plant stress phenotyping; precision agriculture; resolving rater variabilities.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Fig. 1.
Schematic illustration of foliar plant stresses in soybean grouped into two major categories, biotic (bacterial and fungal) and abiotic (nutrient deficiency and chemical injury) stress. The images were used to develop the DCNN for the following eight stresses: bacterial blight (Pseudomonas savastanoi pv. glycinea), bacterial pustule (Xanthomonas axonopodis pv. glycines), sudden death syndrome (SDS, Fusarium virguliforme), Septoria brown spot (Septoria glycines), frogeye leaf spot (Cercospora sojina), iron deficiency chlorosis (IDC), potassium deficiency, and herbicide injury. For each stress, information such as symptom descriptors, areas of appearance, and the most commonly mistaken stresses with similar symptoms is listed. These particular foliar stresses were chosen because of their prevalence and confounding symptoms.
Fig. 2.
Overall schematic of the xPlNet framework: (A) DCNN architecture used. (B) Explanation phase. The concept of isolating the top-K high-resolution feature maps learned by the model based on their localized activation levels was applied to automatically visualize important image features used by the DCNN model.
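The top-K selection described in the explanation phase can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it assumes the explanation phase receives a (channels, height, width) activation tensor from a convolutional layer and ranks channels by their peak localized activation value.

```python
import numpy as np

def top_k_feature_maps(activations, k=3):
    """Rank feature maps by their peak localized activation and return
    the indices and maps of the top-k channels.

    activations: array of shape (C, H, W) from a convolutional layer.
    """
    # Score each channel by its maximum activation value, a simple
    # proxy for how strongly a localized symptom region fires.
    scores = activations.max(axis=(1, 2))
    top_idx = np.argsort(scores)[::-1][:k]
    return top_idx, activations[top_idx]

# Toy example: 8 channels of 4x4 activations; channel 5 gets a hot spot.
rng = np.random.default_rng(0)
acts = rng.random((8, 4, 4))
acts[5, 2, 2] = 10.0  # strong localized activation in channel 5
idx, maps = top_k_feature_maps(acts, k=3)
print(idx[0])  # → 5 (the channel with the hot spot ranks first)
```

In practice the selected low-resolution maps would then be upsampled to the input image resolution to visualize the symptom regions, as Fig. 3 shows.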
Fig. 3.
Leaf image examples for each soybean stress correctly identified by the DCNN model. The unsupervised explanation framework is applied to isolate the regions of interest (symptoms) extracted by the DCNN model, which are highly correlated (spatially) with the symptoms marked manually by expert raters.
Fig. 4.
This confusion matrix shows the stress classification results of the DCNN model for eight different stresses and healthy leaves. The overall classification accuracy of the model is 94.13%. The highest confusion occurred among bacterial blight, bacterial pustule, and Septoria brown spot, which can be attributed to the similarity of symptom expression among these stresses.
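The overall accuracy quoted for the confusion matrix is the standard statistic: correct predictions (the diagonal) over all predictions. A minimal sketch, using a made-up 3-class matrix purely for illustration:

```python
import numpy as np

def overall_accuracy(cm):
    """Overall classification accuracy from a confusion matrix:
    sum of the diagonal (correct predictions) over the grand total."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

# Hypothetical 3-class matrix (rows: true class, columns: predicted).
cm = [[50, 2, 1],
      [3, 45, 2],
      [1, 1, 48]]
print(round(overall_accuracy(cm), 4))  # → 0.9346 (143 correct of 153)
```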
Fig. 5.
Distributions of spatial correlation between human-marked symptoms and machine explanations for four stresses: Septoria brown spot, IDC, herbicide injury, and SDS. Because the distributions are strongly skewed toward high correlation values, they demonstrate the success of the DCNN-based severity estimation framework in correctly identifying symptoms for these stresses. Shown are a few examples with different severity classes (machine learning explanations on the Left, actual images on the Right) using a standard discretized severity scale (0% to 25%: resistant; 25% to 50%: moderately resistant; 50% to 75%: susceptible; and 75% to 100%: highly susceptible).
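The discretized severity scale in the caption maps an estimated symptomatic-area fraction onto a class label. A minimal sketch, assuming each boundary value (25%, 50%, 75%) falls into the higher class, which the caption's overlapping ranges leave ambiguous:

```python
def severity_class(symptom_fraction):
    """Map a symptomatic-area fraction (0.0-1.0) onto the discretized
    severity scale from the caption. Boundary handling is an assumption."""
    pct = symptom_fraction * 100
    if pct < 25:
        return "resistant"
    elif pct < 50:
        return "moderately resistant"
    elif pct < 75:
        return "susceptible"
    return "highly susceptible"

print(severity_class(0.10))  # → resistant
print(severity_class(0.60))  # → susceptible
```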

References

    1. Bock C, Poole G, Parker P, Gottwald T. Plant disease severity estimated visually, by digital photography and image analysis, and by hyperspectral imaging. Crit Rev Plant Sci. 2010;29:59–107.
    1. Esteva A, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115–118. - PMC - PubMed
    1. Yamins DL, DiCarlo JJ. Using goal-driven deep learning models to understand sensory cortex. Nat Neurosci. 2016;19:356–365. - PubMed
    1. Alipanahi B, Delong A, Weirauch MT, Frey BJ. Predicting the sequence specificities of DNA-and RNA-binding proteins by deep learning. Nat Biotechnol. 2015;33:831–838. - PubMed
    1. Mnih V, et al. Human-level control through deep reinforcement learning. Nature. 2015;518:529–533. - PubMed
