An explainable three dimensional framework to uncover learning patterns: A unified look in variable sulci recognition
- PMID: 41151346
- DOI: 10.1016/j.artmed.2025.103286
Abstract
The significant features identified in a representative subset of the dataset during the learning process of an artificial intelligence model are referred to as a 'global' explanation. Three-dimensional (3D) global explanations are crucial in neuroimaging, where a complex representational space demands more than basic two-dimensional interpretations. However, current studies in the literature often lack the accuracy, comprehensibility, and 3D global explanations needed in neuroimaging and beyond. To address this gap, we developed an explainable artificial intelligence (XAI) 3D-Framework capable of providing accurate, low-complexity global explanations. We evaluated the framework using various 3D deep learning models trained on a well-annotated cohort of 596 structural MRIs. The binary classification task focused on detecting the presence or absence of the paracingulate sulcus (PCS), a highly variable brain structure associated with psychosis. Our framework integrates statistical features (Shape) and XAI methods (GradCam and SHAP) with dimensionality reduction, ensuring that explanations reflect both model learning and cohort-specific variability. By combining Shape, GradCam, and SHAP, our framework reduces inter-method variability, enhancing the faithfulness and reliability of global explanations. These robust explanations facilitated the identification of critical sub-regions, including the posterior temporal and internal parietal regions, as well as the cingulate region and thalamus, suggesting potential genetic or developmental influences. For the first time, this XAI 3D-Framework leverages global explanations to uncover the broader developmental context of specific cortical features. This approach advances the fields of deep learning and neuroscience by offering insights into normative brain development and atypical trajectories linked to mental illness, paving the way for more reliable and interpretable AI applications in neuroimaging.
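The abstract describes combining voxel-wise attributions from several methods (Shape, GradCam, SHAP) to reduce inter-method variability in the global explanation. As a minimal illustrative sketch only (not the authors' implementation), one way to fuse such 3D attribution volumes is to min-max normalize each map and average them into a consensus map; all array names and the threshold below are hypothetical:

```python
import numpy as np

def normalize(vol):
    # Min-max normalize a 3D attribution volume to [0, 1].
    vmin, vmax = vol.min(), vol.max()
    return (vol - vmin) / (vmax - vmin + 1e-8)

def consensus_explanation(maps, threshold=0.5):
    """Average several normalized 3D attribution maps and keep
    voxels whose mean attribution exceeds `threshold`."""
    stacked = np.stack([normalize(m) for m in maps])
    mean_map = stacked.mean(axis=0)
    return mean_map, mean_map > threshold

# Toy 3D volumes standing in for GradCam- and SHAP-style outputs.
rng = np.random.default_rng(0)
gradcam_map = rng.random((8, 8, 8))
shap_map = rng.random((8, 8, 8))
mean_map, salient_mask = consensus_explanation([gradcam_map, shap_map])
```

Averaging after per-map normalization keeps any single method's scale from dominating, which is one simple way a consensus can damp inter-method variability; the actual framework's fusion and dimensionality-reduction steps are described in the full paper.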
Keywords: Brain classification; Brain pattern; Deep learning; Paracingulate; Sulcal pattern; XAI.
Copyright © 2025 The Authors. Published by Elsevier B.V. All rights reserved.
Conflict of interest statement
Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Graham K Murray reports financial support provided by the UK Research and Innovation Medical Research Council, and a consulting or advisory relationship with ieso Digital Health. All other authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
