Review

Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review

Colton Ladbury et al. Transl Cancer Res. 2022 Oct;11(10):3853-3868. doi: 10.21037/tcr-22-1626.

Abstract

Background and objective: Machine learning (ML) models are increasingly being utilized in oncology research for use in the clinic. However, while more complicated models may provide improvements in predictive or prognostic power, a hurdle to their adoption is limited model interpretability, wherein the inner workings can be perceived as a "black box". Explainable artificial intelligence (XAI) frameworks including Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are novel, model-agnostic approaches that aim to provide insight into the inner workings of the "black box" by producing quantitative visualizations of how model predictions are calculated. In doing so, XAI can transform complicated ML models into easily understandable charts and interpretable sets of rules, which can give providers an intuitive understanding of the knowledge generated, thus facilitating the deployment of such models in routine clinical workflows.
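The review discusses SHAP and LIME at the framework level rather than at the code level. Purely as a point of reference, the sketch below shows how such attributions are typically produced with the open-source `shap` and `lime` Python packages; the gradient-boosted classifier and the public breast-cancer dataset are illustrative assumptions, not models or data from the reviewed studies.

```python
# Illustrative sketch only (assumed model and public dataset, not from the reviewed studies).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Fit a simple "black box" model on a public oncology-adjacent dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP: per-sample, per-feature attributions that decompose each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global feature-importance "beeswarm" chart

# LIME: local surrogate explanation for a single patient/sample.
lime_explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=["malignant", "benign"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(X.values[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top features driving this one prediction
```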

Methods: We performed a comprehensive, non-systematic review of the latest literature to define use cases of model-agnostic XAI frameworks in oncologic research. The examined database was PubMed/MEDLINE. The last search was run on May 1, 2022.

Key content and findings: In this review, we identified several fields in oncology research where ML models and XAI were utilized to improve interpretability, including prognostication, diagnosis, radiomics, pathology, treatment selection, radiation treatment workflows, and epidemiology. Within these fields, XAI facilitates determination of feature importance in the overall model, visualization of relationships and/or interactions, evaluation of how individual predictions are produced, feature selection, and identification of prognostic and/or predictive thresholds, and it improves overall confidence in the models, among other benefits. These examples provide a basis for future work to build on, which can facilitate adoption in the clinic where the complexity of such modeling would otherwise be prohibitive.
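As a hedged illustration of the relationship/interaction and threshold-finding use cases mentioned above (again using an assumed model and public dataset rather than any study from the review), a SHAP dependence plot can be drawn as follows; the feature value at which the attributions cross zero is often read as a candidate prognostic threshold, and coloring by a second feature exposes pairwise interactions.

```python
# Illustrative sketch only (assumed model and public dataset, not from the reviewed studies).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Attribution of "mean radius" plotted against its value, colored by an
# automatically chosen interacting feature; the zero-crossing on the y-axis
# suggests a candidate decision threshold for this feature.
shap.dependence_plot("mean radius", shap_values, X, interaction_index="auto")
```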

Conclusions: Model-agnostic XAI frameworks offer an intuitive and effective means of describing oncology ML models, with applications including prognostication and determination of optimal treatment regimens. Using such frameworks presents an opportunity to improve understanding of ML models, which is a critical step to their adoption in the clinic.

Keywords: Explainable artificial intelligence (XAI); Local Interpretable Model-agnostic Explanations (LIME); SHapley Additive exPlanations (SHAP); machine learning (ML).


Conflict of interest statement

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://tcr.amegroups.com/article/view/10.21037/tcr-22-1626/coif). AA serves as an unpaid editorial board member of Translational Cancer Research from December 2019 to November 2023. CL reports grant funding from RefleXion Medical. ASR reports funding from NIH NLM grant R01LM013138, NIH NLM grant R01LM013876, NIH NCI grant U01CA232216 and support from Dr. Susumu Ohno Endowed Chair in Theoretical Biology. AA has grant funding from AstraZeneca. The other authors have no conflicts of interest to declare.

Figures

Figure 1. Use of XAI in visualizing the inside of the “black box”. XAI, explainable artificial intelligence.

Figure 2. SHAP plots visualizing non-linear interactions between prognostic features in prostate cancer, including interaction between Gleason score and PSA (A), and PPCs and Gleason score (B). [Credit: ref. (15)]. SHAP, SHapley Additive exPlanations; PSA, prostate specific antigen; PPC, percent positive core.

Figure 3. SHAP plots visualizing neural network identification of a normal brain and meningioma. [Credit: ref. (32)]. SHAP, SHapley Additive exPlanations.

Figure 4. SHAP dependence plots (A,C) and interaction plots (B,D) illustrating thresholds for lymph node burden predictive of benefit of PORT in completely resected N2 NSCLC. [Credit: ref. (45)]. SHAP, SHapley Additive exPlanations; PORT, post-operative radiotherapy; NSCLC, non-small cell lung cancer.


References

    1. Rajkomar A, Dean J, Kohane I. Machine Learning in Medicine. N Engl J Med 2019;380:1347-58. doi: 10.1056/NEJMra1814259.
    2. Papernot N, McDaniel P, Goodfellow I, et al. Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security; 2017.
    3. Diprose WK, Buist N, Hua N, et al. Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. J Am Med Inform Assoc 2020;27:592-600. doi: 10.1093/jamia/ocz229.
    4. Price WN. Big data and black-box medical algorithms. Sci Transl Med 2018;10:eaao5333. doi: 10.1126/scitranslmed.aao5333.
    5. Vilone G, Longo L. Explainable artificial intelligence: a systematic review. arXiv preprint 2020. arXiv:2006.00093.