Review
Front Neurosci. 2020 Oct 8;14:779. doi: 10.3389/fnins.2020.00779. eCollection 2020.

A Survey on Deep Learning for Neuroimaging-Based Brain Disorder Analysis

Li Zhang et al. Front Neurosci.

Abstract

Deep learning has recently been used for the analysis of neuroimages, such as structural magnetic resonance imaging (MRI), functional MRI, and positron emission tomography (PET), and it has achieved significant performance improvements over traditional machine learning in computer-aided diagnosis of brain disorders. This paper reviews the applications of deep learning methods for neuroimaging-based brain disorder analysis. We first provide a comprehensive overview of deep learning techniques and popular network architectures by introducing various types of deep neural networks and recent developments. We then review deep learning methods for computer-aided analysis of four typical brain disorders, including Alzheimer's disease, Parkinson's disease, autism spectrum disorder, and schizophrenia, where the first two diseases are neurodegenerative disorders and the last two are neurodevelopmental and psychiatric disorders, respectively. More importantly, we discuss the limitations of existing studies and present possible future directions.

Keywords: Alzheimer's disease; Parkinson's disease; autism spectrum disorder; deep learning; neuroimage; schizophrenia.


Figures

Figure 1
Architectures of the single-layer (A) and multi-layer (B) neural networks. The blue, green, and orange solid circles represent the input (visible), hidden, and output units, respectively.
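The multi-layer architecture in Figure 1B can be sketched as a feed-forward pass. This is an illustrative example, not code from the paper; the layer sizes and ReLU activation are assumptions for demonstration.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a multi-layer network (Figure 1B style).
    Hidden layers use ReLU; the output layer is linear."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)   # hidden units (green circles)
    return h @ weights[-1] + biases[-1]  # output units (orange circles)

rng = np.random.default_rng(0)
# 4 input units -> 8 hidden units -> 3 output units (sizes are illustrative)
Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
bs = [np.zeros(8), np.zeros(3)]
y = mlp_forward(rng.normal(size=(2, 4)), Ws, bs)
print(y.shape)  # (2, 3): one output row per input sample
```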
Figure 2
Architecture of a stacked auto-encoder. The blue and red dotted boxes represent the encoding and decoding stages, respectively. The blue solid circles are the input and output units, which have the same number of nodes. The orange solid circles represent the latent representation, and the green solid circles represent any hidden layers.
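The encode-then-decode structure of Figure 2 can be sketched as below. This is a hypothetical minimal example (layer sizes and tanh activations are assumptions), showing that the decoder restores the input dimensionality.

```python
import numpy as np

def sae_forward(x, enc_weights, dec_weights):
    """One pass through a stacked auto-encoder (Figure 2).
    The encoding stage compresses x to a latent code; the decoding
    stage reconstructs an output with as many nodes as the input."""
    h = x
    for W in enc_weights:      # encoding stage (blue dotted box)
        h = np.tanh(h @ W)
    z = h                      # latent representation (orange circles)
    for W in dec_weights:      # decoding stage (red dotted box)
        h = np.tanh(h @ W)
    return z, h

rng = np.random.default_rng(1)
enc = [rng.normal(size=(6, 4)), rng.normal(size=(4, 2))]  # 6 -> 4 -> 2
dec = [rng.normal(size=(2, 4)), rng.normal(size=(4, 6))]  # 2 -> 4 -> 6
code, recon = sae_forward(rng.normal(size=(3, 6)), enc, dec)
print(code.shape, recon.shape)  # (3, 2) (3, 6)
```

Note that the reconstruction has 6 units per sample, matching the input, while the latent code is the 2-unit bottleneck.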
Figure 3
Schematic illustration of Deep Belief Networks (A) and a Deep Boltzmann Machine (B). A double-headed arrow represents an undirected connection between two neighboring layers, and a single-headed arrow represents a directed connection. The top two layers of the DBN form an undirected generative model and the remaining layers form a directed generative model, whereas all layers of the DBM form an undirected generative model.
Figure 4
Architecture of Generative Adversarial Networks. "R" and "F" represent the real and fake labels, respectively.
Figure 5
Architecture of convolutional neural networks. Note that an implicit rectified linear unit (ReLU) non-linearity is applied after every layer. The natural images used as input in Krizhevsky et al. (2012) are replaced here by brain MR images.
Figure 6
Architecture of graph convolutional networks. To keep the figure simple, the softmax output layer is not shown.
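A single graph-convolution layer of the kind depicted in Figure 6 can be sketched as follows. This is an illustrative implementation of the commonly used symmetric-normalization propagation rule (a reasonable assumption for this figure, not code from the paper); the toy graph and feature sizes are invented for demonstration.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer with self-loops and symmetric
    normalization: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy chain graph with 4 nodes, 3 input features, 2 output features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(2)
H1 = gcn_layer(A, rng.normal(size=(4, 3)), rng.normal(size=(3, 2)))
print(H1.shape)  # (4, 2): two output features per node
```

In a classification network, a softmax output layer (omitted in the figure) would follow the final graph-convolution layer.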
Figure 7
Architectures of long short-term memory (A) and gated recurrent unit (B) networks. In subfigure (A), the blue, green, and yellow boxes represent the forget gate ft, input gate it, and output gate ot, respectively. In subfigure (B), the blue and yellow boxes represent the reset gate rt and update gate zt, respectively. xt is the input vector and ht is the hidden state. To keep the figure simple, biases are not shown.
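The GRU update in Figure 7B can be written out as a single recurrence step. This is an illustrative sketch using the standard GRU equations (reset gate rt, update gate zt); like the figure, it omits biases, and the dimensions and weights are invented for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a gated recurrent unit (Figure 7B), biases omitted."""
    z = sigmoid(x_t @ Wz + h_prev @ Uz)              # update gate z_t
    r = sigmoid(x_t @ Wr + h_prev @ Ur)              # reset gate r_t
    h_tilde = np.tanh(x_t @ Wh + (r * h_prev) @ Uh)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde          # new hidden state h_t

rng = np.random.default_rng(3)
d_in, d_h = 5, 4  # input and hidden sizes (illustrative)
params = [rng.normal(size=s) for s in [(d_in, d_h), (d_h, d_h)] * 3]
h = gru_cell(rng.normal(size=(d_in,)), np.zeros(d_h), *params)
print(h.shape)  # (4,): hidden state h_t
```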
