Fostering trust and interpretability: integrating explainable AI (XAI) with machine learning for enhanced disease prediction and decision transparency
- PMID: 40999511
- PMCID: PMC12465982
- DOI: 10.1186/s13000-025-01686-3
Abstract
Healthcare has advanced substantially through Artificial Intelligence (AI) techniques for early disease detection and clinical decision support. However, the black-box nature of these models limits widespread adoption of their results by the public. The undisclosed reasoning of such systems creates fundamental obstacles in medical settings that handle critical cases, because practitioners need to understand the reasoning behind a prediction for a particular disease. The proposed work explores a hybrid Machine Learning (ML) framework integrating Explainable AI (XAI) strategies to improve both predictive performance and interpretability. The system leverages Decision Tree, Naive Bayes, Random Forest, and XGBoost algorithms to predict the risk of Diabetes, Anaemia, Thalassemia, Heart Disease, and Thrombocytopenia. SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) complement the proposed system by highlighting the features that contribute to each prediction. The framework achieves an accuracy of 99.2% while providing understandable explanations of model outputs. This combination of performance and interpretability enables clinical practitioners to make decisions with an understanding of AI-generated outputs, thereby reducing distrust in AI-driven healthcare.
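The workflow the abstract describes can be illustrated with a minimal sketch: train one of the named models (XGBoost here) on tabular risk data, then attach SHAP for feature attributions over the test set and LIME for a local explanation of a single prediction. This is not the authors' implementation; the synthetic dataset, feature names, and class labels below are hypothetical placeholders.

```python
# Minimal sketch of the ML + SHAP/LIME pipeline described in the abstract.
# Assumption: the synthetic data stands in for a disease-risk feature table.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic binary-classification data (placeholder for the clinical dataset).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One of the models named in the abstract; hyperparameters left at defaults.
model = XGBClassifier(eval_metric="logloss")
model.fit(X_train, y_train)

# SHAP attributions: TreeExplainer is suited to tree ensembles like XGBoost.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

# LIME: a local, model-agnostic explanation of one test-set prediction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["low risk", "high risk"],  # hypothetical labels
    mode="classification",
)
exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())
```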
Keywords: Explainable artificial intelligence (XAI); Healthcare prediction; Local interpretable model-agnostic explanations (LIME); Machine learning (ML); Random forest; SHapley additive exPlanations (SHAP); XGBoost.
© 2025. The Author(s).
Conflict of interest statement
Declarations. Competing interests: The authors declare no competing interests.
