Med Image Anal. 2021 Dec;74:102233.
doi: 10.1016/j.media.2021.102233. Epub 2021 Sep 12.

BrainGNN: Interpretable Brain Graph Neural Network for fMRI Analysis

Xiaoxiao Li et al. Med Image Anal. 2021 Dec.

Abstract

Understanding which brain regions are related to a specific neurological disorder or cognitive stimulus has been an important area of neuroimaging research. We propose BrainGNN, a graph neural network (GNN) framework for analyzing functional magnetic resonance images (fMRI) and discovering neurological biomarkers. Considering the special properties of brain graphs, we design novel ROI-aware graph convolutional (Ra-GConv) layers that leverage the topological and functional information of fMRI. Motivated by the need for transparency in medical image analysis, BrainGNN contains ROI-selection pooling layers (R-pool) that highlight salient ROIs (nodes in the graph), so that we can infer which ROIs are important for prediction. Furthermore, we propose regularization terms on the pooling results, namely a unit loss, a TopK pooling (TPK) loss, and a group-level consistency (GLC) loss, to encourage reasonable ROI selection and to provide the flexibility to favor either fully individual patterns or patterns that agree with group-level data. We apply the BrainGNN framework to two independent fMRI datasets: an Autism Spectrum Disorder (ASD) fMRI dataset and data from the Human Connectome Project (HCP) 900 Subject Release. We investigate different choices of hyper-parameters and show that BrainGNN outperforms alternative fMRI analysis methods on four different evaluation metrics. The obtained community clustering and salient ROI detection results show high correspondence with previous neuroimaging-derived evidence of biomarkers for ASD and with the specific task states decoded for HCP. Our code is available at https://github.com/xxlya/BrainGNN_Pytorch.

Keywords: ASD; Biomarker; GNN; fMRI.


Conflict of interest statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Fig. 1.
Overview of the pipeline. fMRI images are parcellated by an atlas and converted to graphs. The graphs are then fed to the proposed BrainGNN, which predicts the specific task. Jointly, BrainGNN selects salient brain regions that are informative for the prediction task and clusters brain regions into prediction-related communities.
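The parcellation-to-graph step in the caption above can be sketched as follows. This is a minimal illustration, not the authors' released code: the function name `build_brain_graph`, the correlation-based edges, and the 0.3 threshold are all assumptions chosen for clarity.

```python
import numpy as np

def build_brain_graph(roi_timeseries, edge_threshold=0.3):
    """Hypothetical sketch: turn parcellated fMRI into a graph.

    roi_timeseries: (n_rois, n_timepoints) array, one mean BOLD signal
    per atlas region. Returns node features and an adjacency matrix of
    thresholded Pearson correlations (an assumed edge definition).
    """
    corr = np.corrcoef(roi_timeseries)      # ROI-by-ROI functional connectivity
    np.fill_diagonal(corr, 0.0)             # drop self-loops
    # keep only strong functional connections as graph edges
    adj = np.where(np.abs(corr) > edge_threshold, corr, 0.0)
    # one common choice of node features: each ROI's correlation profile
    node_features = corr
    return node_features, adj

x, a = build_brain_graph(np.random.default_rng(0).standard_normal((84, 200)))
```

With an 84-region atlas (as in the Biopoint data), this yields an 84-node graph whose edges encode functional connectivity.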
Fig. 2.
(a) The BrainGNN architecture proposed in this work. BrainGNN is composed of blocks of Ra-GConv layers and R-pool layers; it takes graphs as inputs and outputs graph-level predictions. (b) How the Ra-GConv layer embeds node features. First, nodes are softly assigned to communities based on their membership scores. Each community is associated with a different basis vector, and each node is embedded using the basis vectors of the communities it belongs to. Then, by aggregating a node's own embedding with its neighbors' embeddings, an updated representation is assigned to each node of the graph. (c) How R-pool selects the nodes to keep. All node representations are projected onto a learnable vector, and the nodes with the largest projected values are retained along with their corresponding connections.
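The node-selection step in panel (c) can be sketched in a few lines. This is a hedged illustration of generic top-k pooling with a learnable projection vector, not the paper's exact implementation; the function name `r_pool` and the sigmoid gating of kept embeddings are assumptions.

```python
import numpy as np

def r_pool(node_embeddings, adj, w, k):
    """Hypothetical sketch of ROI-selection (top-k) pooling.

    Each node's embedding is projected onto a learnable vector w; the k
    nodes with the largest normalized projections are kept, together
    with the edges among them.
    """
    scores = node_embeddings @ w / np.linalg.norm(w)  # one score per node
    keep = np.argsort(scores)[-k:]                    # indices of the top-k nodes
    # gate surviving embeddings by their sigmoid-squashed scores
    gate = 1.0 / (1.0 + np.exp(-scores[keep]))
    pooled_x = node_embeddings[keep] * gate[:, None]
    pooled_adj = adj[np.ix_(keep, keep)]              # induced subgraph
    return pooled_x, pooled_adj, keep
```

Because the projection vector is learned end-to-end, the retained indices themselves indicate which ROIs the model found salient.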
Fig. 3.
The change in the distribution of the node pooling scores ŝ of the 1st R-pool layer over 100 training epochs, shown as kernel density estimate plots. With the TopK pooling (TPK) loss, the pooling scores of the selected nodes and those of the unselected nodes become clearly separated.
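The separation effect shown in the figure can be illustrated with a binary-cross-entropy-style regularizer on sorted scores. This is a hedged sketch of the idea only; the exact form of the paper's TPK loss may differ, and the function name `tpk_loss` is an assumption.

```python
import numpy as np

def tpk_loss(scores, k, eps=1e-8):
    """Hypothetical sketch of a TopK pooling (TPK) regularizer.

    Pushes the sigmoid pooling scores of the k selected nodes toward 1
    and those of the unselected nodes toward 0, so that the two score
    distributions separate over training.
    """
    s = np.sort(scores)[::-1]          # descending: selected nodes first
    selected, rest = s[:k], s[k:]
    return -(np.mean(np.log(selected + eps))
             + np.mean(np.log(1.0 - rest + eps)))
```

Well-separated scores (selected near 1, the rest near 0) yield a small loss, while an undifferentiated score distribution is penalized.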
Fig. 4.
Comparison of Ra-GConv with vanilla-GConv, and the effect of the total-loss coefficients, in terms of accuracy on the validation sets.
Fig. 5.
Interpretation results on the Biopoint task: the salient ROIs selected for three different ASD individuals under different weights λ2 on the group-level consistency term L_GLC. The color bar ranges from 0.1 to 1; bright yellow indicates a high score and dark red a low score. Salient ROIs detected in common across individuals are circled in blue.
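The group-level consistency term L_GLC mentioned in the caption can be sketched as a pairwise penalty on pooling-score vectors of subjects that share a class label. This is a hedged illustration of the concept, not the paper's exact formula; the function name `glc_loss` is an assumption.

```python
import numpy as np

def glc_loss(scores, labels):
    """Hypothetical sketch of a group-level consistency (GLC) regularizer.

    Penalizes squared differences between the node pooling-score vectors
    of subjects in the same class, encouraging the model to select
    similar ROIs across a group. scores: (n_subjects, n_rois) array.
    """
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            if labels[i] == labels[j]:
                loss += np.sum((scores[i] - scores[j]) ** 2)
                pairs += 1
    return loss / max(pairs, 1)
```

A larger weight λ2 on such a term trades individual-level selections for group-consistent ones, which is the effect the figure varies.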
Fig. 6.
Interpretation results on the Biopoint task: salient ROIs (importance scores shown on the color bar) for classifying HC vs. ASD using BrainGNN.
Fig. 7.
Interpretation results on the HCP task: salient ROIs (importance scores shown on the color bar) associated with classifying the seven task states.
Fig. 8.
Correlation coefficients decoded by NeuroSynth (each column normalized by its largest absolute value for better visualization) between the interpreted biomarkers and the functional keywords of each functional state. A large correlation (in red) within a column indicates a strong association between the salient ROIs and that functional keyword. Large values (in red) on the diagonal from bottom-left to top-right indicate reasonable decoding; in particular, a value of 1.00 on the diagonal means the interpreted salient ROIs of a task state are more correlated with that state's keywords than with those of any other state in NeuroSynth.
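The per-column normalization described in the caption is a one-liner; the sketch below shows it explicitly. The function name `normalize_columns` is an assumption for illustration.

```python
import numpy as np

def normalize_columns(corr):
    """Normalize each column of a correlation matrix by its largest
    absolute value, so the strongest entry in every column becomes ±1
    (as described for the Fig. 8 visualization)."""
    return corr / np.max(np.abs(corr), axis=0, keepdims=True)
```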
Fig. 9.
Clustering ROIs using α+_ij from the 1st Ra-GConv layer. Different colors denote different communities.
Fig. 10.
Visualizing the Ra-GConv parameter α+, a nonnegative K × N matrix that encodes the membership score of each ROI to each community. K is the number of communities (vertical axis); we use K = 8 in our experiments. N is the number of ROIs (horizontal axis). (a) α+ for the Biopoint task, N = 84. (b) α+ for the HCP task, N = 268; it is split into three rows for better visualization (note the ROI numbering on the horizontal axes).
