Explainable AI: A review of applications to neuroimaging data
- PMID: 36583102
- PMCID: PMC9793854
- DOI: 10.3389/fnins.2022.906290
Abstract
Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models of the representations learned via hierarchical processing in the human brain. In medical imaging, these models have achieved human-level performance, and in some cases better, in the early diagnosis of a wide range of diseases. However, the goal is often not only to predict group membership or a diagnosis accurately, but also to provide explanations that support the model's decision in a context that a human can readily interpret. The limited transparency of DNNs has hindered their adoption across many domains. Numerous explainable artificial intelligence (XAI) techniques, taking somewhat divergent approaches, have been developed to peer inside the "black box" and make sense of DNN models. Here, we suggest that these methods may be organized by interpretation goal: providing functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of particular features or mappings on a trained model in a post-hoc capacity. We then review recent applications of post-hoc relevance techniques to neuroimaging data. Finally, we suggest a method for comparing the reliability of XAI methods, especially in deep neural networks, and discuss their advantages and pitfalls.
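To make the post-hoc relevance techniques discussed above concrete, the following minimal sketch computes a vanilla gradient saliency map for a trained image classifier in PyTorch. The model choice (a torchvision ResNet-18) and the random tensor standing in for a preprocessed brain slice are illustrative assumptions, not part of the review itself.

    # Minimal sketch of a post-hoc relevance method: vanilla gradient saliency.
    import torch
    from torchvision import models

    # Illustrative stand-ins: any trained classifier and preprocessed input work.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    # One RGB image, e.g. an anatomical slice resized to 224x224 (random here).
    x = torch.rand(1, 3, 224, 224, requires_grad=True)

    logits = model(x)
    top_class = logits.argmax(dim=1).item()

    # Backpropagate the top-class score to the input pixels.
    logits[0, top_class].backward()

    # Relevance map: largest absolute gradient across color channels, indicating
    # which pixels most influence the predicted class.
    saliency = x.grad.abs().max(dim=1).values.squeeze()  # shape: (224, 224)

Raw gradients of this kind are noisy and can fail sanity checks; variants such as SmoothGrad, integrated gradients, and layer-wise relevance propagation refine this basic signal, which is one motivation for comparing the reliability of XAI methods as the review proposes.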
Keywords: artificial intelligence (AI); deep learning; explainable AI; interpretability; medical imaging; neural networks; neuroimaging.
Copyright © 2022 Farahani, Fiok, Lahijanian, Karwowski and Douglas.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Similar articles
- Survey of Explainable AI Techniques in Healthcare. Sensors (Basel). 2023 Jan 5;23(2):634. doi: 10.3390/s23020634. PMID: 36679430. Free PMC article. Review.
- Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med. 2023 Apr;156:106668. doi: 10.1016/j.compbiomed.2023.106668. Epub 2023 Feb 18. PMID: 36863192. Review.
- Applications of Explainable Artificial Intelligence in Diagnosis and Surgery. Diagnostics (Basel). 2022 Jan 19;12(2):237. doi: 10.3390/diagnostics12020237. PMID: 35204328. Free PMC article. Review.
- Artificial intelligence: Deep learning in oncological radiomics and challenges of interpretability and data harmonization. Phys Med. 2021 Mar;83:108-121. doi: 10.1016/j.ejmp.2021.03.009. Epub 2021 Mar 22. PMID: 33765601. Review.
- BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data. Comput Biol Med. 2025 Jun;191:110124. doi: 10.1016/j.compbiomed.2025.110124. Epub 2025 Apr 15. PMID: 40239236.
Cited by
- Recent Advances in Explainable Artificial Intelligence for Magnetic Resonance Imaging. Diagnostics (Basel). 2023 Apr 27;13(9):1571. doi: 10.3390/diagnostics13091571. PMID: 37174962. Free PMC article. Review.
- Large-Scale Neuroimaging of Mental Illness. Curr Top Behav Neurosci. 2024;68:371-397. doi: 10.1007/7854_2024_462. PMID: 38554248. Review.
- Leveraging AI-Driven Neuroimaging Biomarkers for Early Detection and Social Function Prediction in Autism Spectrum Disorders: A Systematic Review. Healthcare (Basel). 2025 Jul 22;13(15):1776. doi: 10.3390/healthcare13151776. PMID: 40805809. Free PMC article. Review.
- Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence. Bioengineering (Basel). 2024 Apr 12;11(4):369. doi: 10.3390/bioengineering11040369. PMID: 38671790. Free PMC article. Review.
- Explainable brain age prediction: a comparative evaluation of morphometric and deep learning pipelines. Brain Inform. 2024 Dec 18;11(1):33. doi: 10.1186/s40708-024-00244-9. PMID: 39692946. Free PMC article.