IEEE Access. 2024;12:53277-53292. doi: 10.1109/access.2024.3387702. Epub 2024 Apr 11.

A Framework for Interpretability in Machine Learning for Medical Imaging



Alan Q Wang et al. IEEE Access. 2024.

Abstract

Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness about what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common to medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI that serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, to inspire developers of models in the medical imaging field to reason more deeply about what interpretability achieves, and to suggest future directions for interpretability research.
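As one concrete illustration of the localization element named above, a minimal sketch of a vanilla gradient saliency map follows. This is not the paper's method, only one common way MLMI practitioners localize the image regions driving a prediction; the toy PyTorch classifier and random placeholder image are assumptions made purely so the sketch is self-contained.

    # Illustrative sketch of the "localization" element (assumption: a PyTorch
    # image classifier; the tiny model and random image are placeholders,
    # not part of the paper).
    import torch
    import torch.nn as nn

    # Stand-in for a trained medical-imaging classifier (e.g., lesion vs. no lesion).
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
    )
    model.eval()

    image = torch.randn(1, 1, 64, 64, requires_grad=True)  # placeholder scan
    logits = model(image)
    score = logits[0, logits.argmax()]  # score of the predicted class
    score.backward()

    # Per-pixel importance: large gradients mark the regions that most
    # influence the prediction, yielding a coarse localization map that
    # can be overlaid on the scan for a clinician to inspect.
    saliency = image.grad.abs().squeeze(0).squeeze(0)
    print(saliency.shape)  # torch.Size([64, 64])

In practice, a localization map like this would be computed on a trained model and a real scan, then judged against the other elements (e.g., visual recognizability to the reading clinician).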

Keywords: Interpretability; explainability; machine learning; medical imaging.


Figures

FIGURE 1. A framework for interpretability in MLMI.

FIGURE 2. Common tasks in MLMI. Tasks are primarily characterized by the structures of their input features and output predictions. In MLMI, inputs are images or features derived from images, sometimes combined with metadata such as patient information. The structure of the output predictions is determined by the task.

FIGURE 3. Graphical overview of the elements of interpretable MLMI.


