PeerJ Comput Sci. 2023 Nov 29;9:e1629.
doi: 10.7717/peerj-cs.1629. eCollection 2023.

Understanding the black-box: towards interpretable and reliable deep learning models

Tehreem Qamar et al.

Abstract

Deep learning (DL) has revolutionized the field of artificial intelligence by providing sophisticated models across a diverse range of applications, from image and speech recognition to natural language processing and autonomous driving. However, deep learning models are typically black boxes whose reasons for a prediction are unknown, so their reliability becomes questionable in many circumstances. Explainable AI (XAI) plays an important role in improving the transparency and interpretability of a model, thereby making it more reliable for real-time deployment. To investigate the reliability and truthfulness of DL models, this research develops image classification models using transfer learning and validates the results with an XAI technique. The contribution of this research is therefore twofold. First, we employ three pre-trained models, VGG16, MobileNetV2 and ResNet50, with multiple transfer learning techniques for a fruit classification task spanning 131 classes. Second, we inspect the reliability of the models built on these pre-trained networks using Local Interpretable Model-Agnostic Explanations (LIME), a popular XAI technique that generates explanations for individual predictions. Experimental results reveal that transfer learning yields around 98% accuracy. The models' classifications were validated on different instances using LIME, and each model's predictions proved interpretable and understandable, as they are based on image features pertinent to the particular classes. We believe this research offers insight into how an interpretation can be drawn from a complex AI model so that its accountability and trustworthiness can be increased.
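The "frozen layers" transfer learning the abstract describes keeps the pre-trained backbone's weights fixed and trains only a new classification head. The sketch below is a toy numpy analogue of that idea, not the authors' code: a fixed random projection stands in for the frozen VGG16/MobileNetV2/ResNet50 feature extractor, a 2-D synthetic dataset stands in for the Fruits 360 images, and the head is fit by ridge-regularized least squares. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained backbone: a *frozen* random feature map.
# In the paper's setting this would be an ImageNet-pretrained network with
# its layers marked non-trainable; here the weights are simply never updated.
W_frozen = rng.normal(size=(2, 16))

def backbone(x):
    """Frozen feature extractor: fixed weights, never trained."""
    return np.tanh(x @ W_frozen)

# Two well-separated synthetic "classes" standing in for fruit images.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),
               rng.normal(+2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Train ONLY the new classification head on the frozen features
# (closed-form ridge regression against +/-1 targets).
F = backbone(X)
beta = np.linalg.solve(F.T @ F + 1e-2 * np.eye(16), F.T @ (2 * y - 1))

preds = (backbone(X) @ beta > 0).astype(int)
accuracy = (preds == y).mean()
print(accuracy)
```

The fine-tuned variant the paper also evaluates would differ only in that some backbone weights are updated during training instead of being held fixed.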

Keywords: Deep learning; Explainable AI; Pre-trained models; Transfer learning.


Conflict of interest statement

The authors declare there are no competing interests.

Figures

Figure 1. DL architecture used in this study utilizing the VGG16 pre-trained model.
Figure 2. DL architecture used in this study utilizing the MobileNetV2 pre-trained model.
Figure 3. DL architecture used in this study utilizing the ResNet50 pre-trained model.
Figure 4. Sample images from the Fruits 360 dataset.
Figure 5. Results of the frozen-layers technique of transfer learning.
Figure 6. Results of the fine-tuned-layers technique of transfer learning.
Figure 7. Perturbed images.
Figure 8. Top feature selected by each classification model for the chosen instance.
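Figures 7 and 8 reflect the core mechanism of LIME: perturb interpretable components of the input (e.g., superpixels switched on or off), query the black-box model on each perturbation, and fit a locally weighted linear surrogate whose coefficients rank feature importance. The following is a minimal numpy sketch of that procedure under simplifying assumptions (binary presence features instead of real superpixels, ridge regression as the surrogate); it is not the `lime` library's API.

```python
import numpy as np

def lime_explain(predict_fn, n_features, n_samples=500, kernel_width=0.25, seed=0):
    """Toy LIME: perturb binary presence features, weight samples by
    proximity to the original instance, fit a ridge surrogate, and
    return per-feature importance weights."""
    rng = np.random.default_rng(seed)
    # 1. Sample binary masks: 1 = feature (superpixel) kept, 0 = hidden.
    Z = rng.integers(0, 2, size=(n_samples, n_features))
    Z[0] = 1                              # include the unperturbed instance
    # 2. Query the black-box model on every perturbed instance.
    y = np.array([predict_fn(z) for z in Z], dtype=float)
    # 3. Weight samples by closeness to the all-features-present instance.
    dist = 1.0 - Z.mean(axis=1)           # fraction of features hidden
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Weighted ridge regression: (Z'WZ + lam*I) beta = Z'Wy.
    W = np.diag(w)
    A = Z.T @ W @ Z + 1e-3 * np.eye(n_features)
    beta = np.linalg.solve(A, Z.T @ W @ y)
    return beta

# Hypothetical black box whose output depends almost entirely on feature 0:
beta = lime_explain(lambda z: 3.0 * z[0] + 0.1 * z[1], n_features=5)
top_feature = int(np.argmax(np.abs(beta)))
print(top_feature)  # the surrogate ranks feature 0 as most important
```

Figure 8's "top feature" per model corresponds to the largest-magnitude coefficient of such a surrogate, highlighted on the image.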

