Breast Lesion Detection Using Weakly Dependent Customized Features and Machine Learning Models with Explainable Artificial Intelligence
- PMID: 40422992
- PMCID: PMC12112174
- DOI: 10.3390/jimaging11050135
Abstract
This research proposes a novel strategy for accurate breast lesion classification that combines explainable artificial intelligence (XAI), machine learning (ML) classifiers, and customized weakly dependent features extracted from breast ultrasound (BU) images. Two new weakly dependent feature classes are proposed to improve diagnostic accuracy and diversify the training data. They are based on image intensity variations and on the area of bounded partitions, and they provide complementary rather than overlapping information. ML classifiers such as Random Forest (RF), Extreme Gradient Boosting (XGB), Gradient Boosting Classifiers (GBC), and LASSO regression were trained with both customized feature classes. To validate the reliability of the study and the results obtained, we conducted a statistical analysis using the McNemar test. An XAI model was then combined with ML to examine the influence of individual features, the constraints of feature selection, and the interpretability of the different ML models. LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) were used in the XAI process to enhance transparency and interpretation in clinical decision-making. The results revealed a set of relevant features for the malignant class that was consistently identified by all classifiers, and likewise for the benign class, although the feature-importance rankings varied across classifiers. Furthermore, our study demonstrates that the correlation between dependent features does not impact explainability.
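The McNemar test mentioned above compares two classifiers on the same test set using only the discordant pairs: the count of cases the first classifier got right and the second got wrong, and vice versa. The paper does not publish its analysis code; the following is a minimal, generic sketch of the continuity-corrected McNemar statistic in plain Python (the function name `mcnemar` and the example counts are illustrative, not taken from the study).

```python
import math

def mcnemar(b: int, c: int) -> tuple[float, float]:
    """Continuity-corrected McNemar test for paired classifier outputs.

    b: cases classifier A classified correctly and classifier B incorrectly
    c: cases classifier A classified incorrectly and classifier B correctly
    Requires b + c > 0. Returns (chi-square statistic, approximate p-value).
    """
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # For a chi-square variable with 1 degree of freedom,
    # P(X >= stat) = erfc(sqrt(stat / 2)).
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Illustrative counts: 15 vs. 5 discordant cases.
stat, p = mcnemar(15, 5)
```

With these toy counts the statistic is (|15 - 5| - 1)^2 / 20 = 4.05, giving p < 0.05, i.e. the two classifiers' error patterns would differ significantly at the 5% level.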
Keywords: LIME; SHAP; XAI; dependent features; machine learning.
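LIME and SHAP, the model-agnostic tools used in the abstract, both attribute a prediction to input features by perturbing those features and observing the model's response. As a self-contained illustration of that perturbation idea (not the paper's method, and simpler than either library), the sketch below implements permutation importance in pure Python: shuffle one feature column at a time and record the drop in accuracy. All names (`permutation_importance`, the toy threshold model) are hypothetical.

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: mean accuracy drop when one feature is shuffled.

    model: callable mapping a feature row to a predicted label
    X: list of feature rows (lists of floats); y: list of true labels
    """
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-label association for column j
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy data: the label depends only on feature 0, so feature 1 should score ~0.
random.seed(1)
X = [[random.random(), random.random()] for _ in range(200)]
y = [int(row[0] > 0.5) for row in X]
model = lambda row: int(row[0] > 0.5)
imps = permutation_importance(model, X, y)
```

Here `imps[1]` is exactly 0, since shuffling an ignored feature never changes a prediction, while `imps[0]` is large; SHAP and LIME refine this perturbation principle with local surrogate models and Shapley-value weighting.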
Conflict of interest statement
The authors declare no conflicts of interest.
Similar articles
- Explainable Artificial Intelligence in Quantifying Breast Cancer Factors: Saudi Arabia Context. Healthcare (Basel). 2024 May 15;12(10):1025. doi: 10.3390/healthcare12101025. PMID: 38786433.
- Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data. Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. PMID: 38111787.
- Investigating Protective and Risk Factors and Predictive Insights for Aboriginal Perinatal Mental Health: Explainable Artificial Intelligence Approach. J Med Internet Res. 2025 Apr 30;27:e68030. doi: 10.2196/68030. PMID: 40306634.
- Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review. Transl Cancer Res. 2022 Oct;11(10):3853-3868. doi: 10.21037/tcr-22-1626. PMID: 36388027. Review.
- Explainable Artificial Intelligence in Radiological Cardiovascular Imaging-A Systematic Review. Diagnostics (Basel). 2025 May 31;15(11):1399. doi: 10.3390/diagnostics15111399. PMID: 40506971. Review.