Explainable AI: A review of applications to neuroimaging data

Farzad V Farahani et al. Front Neurosci. 2022 Dec 1;16:906290. doi: 10.3389/fnins.2022.906290. eCollection 2022.

Abstract

Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models of the representations learned via hierarchical processing in the human brain. In medical imaging, these models have achieved human-level, and in some cases superior, performance in the early diagnosis of a wide range of diseases. However, the goal is often not only to predict group membership or a diagnosis accurately but also to provide explanations that support the model's decision in a form a human can readily interpret. The limited transparency of DNNs has hindered their adoption across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the "black box" and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post-hoc capacity. We then focus on reviewing recent applications of post-hoc relevance techniques to neuroimaging data. Finally, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.

Keywords: artificial intelligence (AI); deep learning; explainable AI; interpretability; medical imaging; neural networks; neuroimaging.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
(A) Explainable AI methods taxonomy. (B) Functional approaches attempt to disclose the algorithm's mechanistic aspects. (C) Archetypal approaches, like generative methods, seek to uncover input patterns that yield the best model response. (D) Post-hoc perturbation relevance approaches generally change the inputs or the model's components and then attribute relevance in proportion to the change in model output. (E) Post-hoc decomposition relevance approaches are propagation-based techniques that explain an algorithm's decisions by redistributing the function value (i.e., the neural network's output) to the input variables, often in a layer-by-layer fashion.
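
For illustration, below is a minimal sketch of the perturbation-based relevance idea in panel (D): occlude patches of the input and attribute relevance in proportion to the drop in model output. All names here (toy_model, occlusion_relevance) are hypothetical stand-ins rather than code from the reviewed studies; in practice the model would be a trained DNN applied to a neuroimage.

# Minimal occlusion-relevance sketch (perturbation approach, panel D).
# Assumes a generic scalar-scoring model callable; toy_model is a placeholder.
import numpy as np

def toy_model(image: np.ndarray) -> float:
    """Hypothetical scalar 'class score': mean intensity of a central region."""
    h, w = image.shape
    return float(image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

def occlusion_relevance(model, image, patch=8, stride=4, fill=0.0):
    """Relevance map: score drop when each patch is replaced by `fill`."""
    base = model(image)
    relevance = np.zeros_like(image, dtype=float)
    counts = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill   # perturb the input
            drop = base - model(occluded)               # larger drop -> more relevant patch
            relevance[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return relevance / np.maximum(counts, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    heatmap = occlusion_relevance(toy_model, img)
    print(heatmap.shape, float(heatmap.max()))

Decomposition approaches, as in panel (E), instead propagate the network's output backward through its layers rather than repeatedly re-running the forward pass on perturbed inputs.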
Figure 2
The flow diagram of the methodology and selection processes used in this systematic review follows the PRISMA statement (Moher et al., 2009).
Figure 3
Study characteristics. (A) Categorization of the included studies, (B) XAI in medical imaging, and (C) a bubble plot showing the reviewed studies by type of XAI method, imaging modality, sample size, and publication trend in recent years.
Figure 4
Co-occurrence network of the commonly used words in reviewed studies.
Figure 5
Assessing the risk of bias using the Cochrane Collaboration's tool.
Figure 6
Requirement for interpretability in medical intelligent systems.

