Diagn Pathol. 2025 Sep 25;20(1):105.
doi: 10.1186/s13000-025-01686-3.

Fostering trust and interpretability: integrating explainable AI (XAI) with machine learning for enhanced disease prediction and decision transparency

Renuka Agrawal et al. Diagn Pathol.

Abstract

Medical healthcare has advanced substantially due to Artificial Intelligence (AI) techniques for early disease detection and clinical decision support. However, the black-box nature of these models hinders their widespread adoption, because neither the public nor clinicians can see how results are produced. This opacity is a fundamental obstacle in medical settings handling critical cases, where practitioners need to understand the reasoning behind a disease prediction. The proposed work explores a hybrid Machine Learning (ML) framework that integrates Explainable AI (XAI) strategies to improve both predictive performance and interpretability. The system leverages Decision Tree, Naive Bayes, Random Forest, and XGBoost algorithms to predict the risk of diabetes, anaemia, thalassemia, heart disease, and thrombocytopenia. SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) extend the system by displaying the features that contribute most to each prediction. The framework achieves an accuracy of 99.2% while providing understandable explanations of model outputs. This combination of performance and interpretability enables clinical practitioners to make decisions with an understanding of AI-generated outputs, thereby reducing distrust in AI-driven healthcare.
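The abstract's pipeline pairs tree ensembles with SHAP and LIME for feature attribution. The minimal sketch below is an assumption-laden illustration, not the authors' code: it uses synthetic data (a stand-in for the paper's blood-panel features) and scikit-learn's built-in impurity-based feature importances as a simple global proxy for the SHAP/LIME attributions the paper describes.

```python
# Illustrative sketch only -- synthetic data, not the authors' dataset.
# The paper uses SHAP/LIME; impurity-based importances stand in here
# as a simple global measure of which features drive the predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical features (hemoglobin, RBC count, ...)
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Rank features by their contribution to the forest's splits.
for i, imp in enumerate(model.feature_importances_):
    print(f"feature_{i}: {imp:.3f}")
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```

In the paper's setting, the per-prediction view (which LIME and SHAP provide) is what supports clinician trust; the global importances above only summarize the model as a whole.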

Keywords: Explainable artificial intelligence (XAI); Healthcare prediction; Local interpretable model agnostic explanations (LIME); Machine learning (ML); Random forest; SHapley additive exPlanations (SHAP); XGBoost.


Conflict of interest statement

Competing interests: The authors declare no competing interests.

Figures

Fig. 1. Flow of methodology for healthcare prediction system
Fig. 2. Disease distribution analysis
Fig. 3. Correlation matrix
Fig. 4. Health feature boxplots by disease
Fig. 5. Error bar plots for ML models
Fig. 6. LIME explanation for anemia prediction; low hemoglobin and RBC counts contributed most significantly
Fig. 7. LIME explanation for thrombocytopenia prediction; low platelet count was the key factor
Fig. 8. LIME explanation for thalassemia prediction
Fig. 9. LIME explanation for diabetes prediction
Fig. 10. LIME explanation for heart disease prediction

