DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence
- PMID: 37897989
- DOI: 10.1016/j.cmpb.2023.107879
Abstract
Background and objective: Artificial intelligence (AI) has several uses in the healthcare industry, including healthcare management, medical forecasting, clinical decision-making, and diagnosis. AI technologies have reached human-like performance, yet their adoption remains limited because they are still largely viewed as opaque black boxes; this distrust is the primary barrier to their real-world application, particularly in healthcare. There is therefore a need for interpretable predictors that not only make accurate predictions but also explain them.
Methods: This study introduces "DeepXplainer", a new interpretable hybrid deep learning-based technique for detecting lung cancer and explaining its predictions. The technique combines a convolutional neural network (CNN) with XGBoost: the CNN's convolutional layers automatically learn features from the input, and XGBoost then predicts the class label. To explain the predictions, the explainable artificial intelligence (XAI) method "SHAP" is applied.
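The hybrid pipeline described above (convolutional feature learning, gradient-boosted classification, post-hoc attribution) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: a fixed 1-D convolution stands in for the learned CNN features, scikit-learn's GradientBoostingClassifier stands in for XGBoost, permutation importance stands in for SHAP's global view, and all data, names, and parameters are synthetic and illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy tabular data standing in for the "Survey Lung Cancer" records:
# 300 patients, 15 numeric features, binary label.
X = rng.normal(size=(300, 15))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

def conv_features(X, kernel=np.array([0.25, 0.5, 0.25])):
    """Stand-in for the CNN feature learner: a fixed 1-D smoothing
    convolution applied across each record's feature vector."""
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="valid"), 1, X
    )

F = conv_features(X)                      # shape (300, 13)
X_tr, X_te, y_tr, y_te = train_test_split(F, y, random_state=0)

# Gradient boosting replaces XGBoost in this sketch.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Permutation importance as a rough stand-in for a SHAP global summary.
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
top_feature = int(np.argmax(imp.importances_mean))
```

The design point is the hand-off: the convolutional stage outputs a learned feature matrix, and the boosted-tree stage consumes that matrix rather than the raw inputs.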
Results: The method was evaluated on the open-source "Survey Lung Cancer" dataset. It outperformed existing methods on multiple metrics, including accuracy, sensitivity, and F1-score, achieving an accuracy of 97.43%, a sensitivity of 98.71%, and an F1-score of 98.08%. Each prediction made at this level of accuracy is then explained by the explainable artificial intelligence method at both the local and global levels.
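The local explanations mentioned above rest on Shapley values, which SHAP approximates efficiently; for a tiny model the exact computation can be written directly. The sketch below is illustrative only: the two-feature "risk score" model and its feature names are hypothetical, not from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, features, baseline):
    """Exact Shapley values for model f on one instance.
    f takes a dict of feature values; features absent from a
    coalition are replaced by their baseline value."""
    names = list(features)
    n = len(names)
    phi = {}
    for i in names:
        others = [j for j in names if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Weight of coalition S in the Shapley formula.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = {j: features[j] if (j in S or j == i) else baseline[j]
                          for j in names}
                without = {j: features[j] if j in S else baseline[j]
                           for j in names}
                total += w * (f(with_i) - f(without))
        phi[i] = total
    return phi

# Hypothetical additive "risk score": smoking + 2 * normalized age.
model = lambda x: x["smoking"] + 2 * x["age"]
phi = shapley_values(model,
                     {"smoking": 1.0, "age": 0.5},   # instance to explain
                     {"smoking": 0.0, "age": 0.0})   # baseline
# For an additive model each feature's Shapley value is its own term,
# so phi["smoking"] == 1.0 and phi["age"] == 1.0 here.
```

The values satisfy the efficiency property: they sum to the difference between the model output at the instance and at the baseline, which is what makes per-prediction (local) attribution well defined.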
Conclusions: A deep learning-based classification model for lung cancer is proposed with three primary components: one for feature learning, one for classification, and one for explaining the predictions made by the proposed hybrid (ConvXGB) model. "DeepXplainer" has been evaluated on a variety of metrics and outperforms the current benchmarks. By providing explanations for its predictions, the proposed approach may help doctors detect and treat lung cancer patients more effectively.
Keywords: Artificial intelligence; Deep learning; Explainable artificial intelligence (XAI); Lung cancer; SHAP; Smart healthcare systems.
Copyright © 2023 Elsevier B.V. All rights reserved.
Conflict of interest statement
Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Similar articles
- Enhanced joint hybrid deep neural network explainable artificial intelligence model for 1-hr ahead solar ultraviolet index prediction. Comput Methods Programs Biomed. 2023 Nov;241:107737. doi: 10.1016/j.cmpb.2023.107737. PMID: 37573641
- A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare. J Neurosci Methods. 2024 Aug;408:110159. doi: 10.1016/j.jneumeth.2024.110159. PMID: 38723868
- Investigating Protective and Risk Factors and Predictive Insights for Aboriginal Perinatal Mental Health: Explainable Artificial Intelligence Approach. J Med Internet Res. 2025 Apr 30;27:e68030. doi: 10.2196/68030. PMID: 40306634
- Explainable AI for Bioinformatics: Methods, Tools and Applications. Brief Bioinform. 2023 Sep 20;24(5):bbad236. doi: 10.1093/bib/bbad236. PMID: 37478371. Review.
- Explainability and white box in drug discovery. Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. PMID: 37105727. Review.
Cited by
- AI-driven analysis by identifying risk factors of VL relapse in HIV co-infected patients. Sci Rep. 2025 Jul 1;15(1):21067. doi: 10.1038/s41598-025-07406-7. PMID: 40596278
- Addressing cross-population domain shift in chest X-ray classification through supervised adversarial domain adaptation. Sci Rep. 2025 Apr 3;15(1):11383. doi: 10.1038/s41598-025-95390-3. PMID: 40181036
- Exploring the clinical value of concept-based AI explanations in gastrointestinal disease detection. Sci Rep. 2025 Aug 7;15(1):28860. doi: 10.1038/s41598-025-14408-y. PMID: 40775463
- Explainable lung cancer classification with ensemble transfer learning of VGG16, Resnet50 and InceptionV3 using grad-cam. BMC Med Imaging. 2024 Jul 19;24(1):176. doi: 10.1186/s12880-024-01345-x. PMID: 39030496
- CNN-TumorNet: leveraging explainability in deep learning for precise brain tumor diagnosis on MRI images. Front Oncol. 2025 Mar 26;15:1554559. doi: 10.3389/fonc.2025.1554559. PMID: 40206584