Sensors (Basel). 2023 Apr 5;23(7):3751. doi: 10.3390/s23073751.

Aggregating Different Scales of Attention on Feature Variants for Tomato Leaf Disease Diagnosis from Image Data: A Transformer Driven Study

Shahriar Hossain et al.

Abstract

Tomato leaf diseases can cause significant financial damage by harming crops, and they are therefore a major concern for tomato growers worldwide. The diseases come in a variety of forms, caused by environmental stress and various pathogens. An automated approach to detecting leaf disease from images would help farmers take effective control measures quickly and affordably. This study therefore analyzes the effects of transformer-based approaches that aggregate different scales of attention on feature variants for the classification of tomato leaf diseases from image data. Four state-of-the-art transformer-based models, namely External Attention Transformer (EANet), Multi-Axis Vision Transformer (MaxViT), Compact Convolutional Transformers (CCT), and Pyramid Vision Transformer (PVT), are trained and tested on a multiclass tomato disease dataset. The results show that MaxViT comfortably outperforms the other three transformer models with 97% overall accuracy, compared with 89% for EANet, 91% for CCT, and 93% for PVT. MaxViT also achieves a smoother learning curve than the other transformers. We further verified these results on a second, smaller dataset. Overall, the empirical analysis presented in the paper shows that MaxViT is the most effective of the transformer models studied for classifying tomato leaf disease, provided that sufficiently powerful hardware is available to run the model.
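
To make the training setup concrete, below is a minimal fine-tuning sketch, not the authors' code: it loads a pretrained MaxViT (the best-performing model in the study) via the timm library and fine-tunes it on a folder-structured leaf image dataset. The model variant (maxvit_tiny_tf_224), dataset path, class count, and hyperparameters are all illustrative assumptions.

import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

NUM_CLASSES = 10  # assumption: ten tomato leaf classes; adjust to the dataset

# Pretrained MaxViT backbone with a fresh classification head.
model = timm.create_model("maxvit_tiny_tf_224", pretrained=True,
                          num_classes=NUM_CLASSES)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset layout: one subfolder per disease class.
train_set = datasets.ImageFolder("data/tomato/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):  # illustrative epoch count
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

The same loop applies to the other three architectures, given equivalent model implementations.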

Keywords: CCT; EANet; MaxViT; PVT; attention; tomato leaf disease; transformers.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Sample of the first dataset used in the study.
Figure 2. Sample of the second dataset used in the study.
Figure 3. External attention for the EANet model [31].
Figure 4. MaxViT architecture [32]. Note that this model uses both local and global attention mechanisms via the MaxViT block.
Figure 5. MaxViT block capturing both local and global attention information [32] (see the partition sketch after this list).
Figure 6. Compact Convolutional Transformer (CCT) used as the classifier.
Figure 7. PVT architecture [34].
Figure 8. Summary of the methodology.
Figure 9. Accuracy and loss curves for the models.
Figure 10. Classification report.
Figure 11. Confusion matrices.
Figure 12. ROC curves for each of the models.
Figure 13. Inconsistent pattern in the Septoria Leaf Spot class.
Figure 14. Classification report.
Figure 15. Accuracy and loss curves for the models.
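
Figures 4 and 5 center on the MaxViT block's pairing of local and global attention. As a minimal sketch under our own assumptions (not the paper's code), the two tensor partitions that define those receptive fields can be written as follows; the attention computation itself is omitted, and only the reshapes are shown.

import torch

def block_partition(x, p):
    # Split (B, H, W, C) into non-overlapping p x p windows:
    # tokens inside each window attend to each other (local attention).
    B, H, W, C = x.shape
    x = x.view(B, H // p, p, W // p, p, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, p, p, C)

def grid_partition(x, g):
    # Split (B, H, W, C) into a g x g grid: each group gathers tokens
    # strided H//g apart, so attention within a group is sparse and global.
    B, H, W, C = x.shape
    x = x.view(B, g, H // g, g, W // g, C)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(-1, g, g, C)

x = torch.randn(1, 8, 8, 3)         # toy 8x8 feature map
print(block_partition(x, 4).shape)  # (4, 4, 4, 3): four local 4x4 windows
print(grid_partition(x, 4).shape)   # (4, 4, 4, 3): four strided global 4x4 grids

Alternating the two partitions lets a single block mix nearby tokens and distant tokens without full quadratic attention over the whole feature map.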

References

    1. Kaselimi M., Voulodimos A., Daskalopoulos I., Doulamis N., Doulamis A. A Vision Transformer Model for Convolution-Free Multilabel Classification of Satellite Imagery in Deforestation Monitoring. IEEE Trans. Neural Netw. Learn. Syst. 2022:1–9 (Early Access). doi: 10.1109/TNNLS.2022.3144791.
    2. Wang L., Fang S., Meng X., Li R. Building Extraction With Vision Transformer. IEEE Trans. Geosci. Remote Sens. 2022;60:1–11. doi: 10.1109/TGRS.2022.3186634.
    3. Meng X., Wang N., Shao F., Li S. Vision Transformer for Pansharpening. IEEE Trans. Geosci. Remote Sens. 2022;60:1–11. doi: 10.1109/TGRS.2022.3168465.
    4. Wang T., Gong L., Wang C., Yang Y., Gao Y., Zhou X., Chen H. ViA: A Novel Vision-Transformer Accelerator Based on FPGA. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2022;41:4088–4099. doi: 10.1109/TCAD.2022.3197489.
    5. Han K., Wang Y., Chen H., Chen X., Guo J., Liu Z., Tang Y., Xiao A., Xu C., Xu Y., et al. A Survey on Vision Transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2022;45:1. doi: 10.1109/TPAMI.2022.3152247.
