Latent Representation Learning for Alzheimer's Disease Diagnosis With Incomplete Multi-Modality Neuroimaging and Genetic Data

Tao Zhou et al.

IEEE Trans Med Imaging. 2019 Oct;38(10):2411-2422. doi: 10.1109/TMI.2019.2913158. Epub 2019 Apr 25.

Abstract

The fusion of complementary information contained in multi-modality data [e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), and genetic data] has advanced automated Alzheimer's disease (AD) diagnosis. However, multi-modality-based AD diagnostic models are often hindered by missing data, i.e., not all subjects have complete multi-modality data. One simple solution used by many previous studies is to discard samples with missing modalities, but this significantly reduces the number of training samples and thus leads to a sub-optimal classification model. Furthermore, when building the classification model, most existing methods simply concatenate features from different modalities into a single feature vector without considering their underlying associations. As features from different modalities are often closely related (e.g., MRI and PET features are extracted from the same brain regions), exploiting these inter-modality associations may improve the robustness of the diagnostic model. To this end, we propose a novel latent representation learning method for multi-modality-based AD diagnosis. Specifically, we use all available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation, but also use samples with incomplete multi-modality data to learn independent modality-specific latent representations. We then project the latent representations to the label space for AD diagnosis. We perform experiments using 737 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the experimental results verify the effectiveness of the proposed method.
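To make the idea concrete, below is a minimal NumPy sketch of the core mechanism described above: every available sample is projected into a shared latent space, subjects with complete data share one latent code across modalities, subjects with a single modality get a modality-specific code, and a linear map sends latent codes to the label space. The two-modality toy setup, variable names, alternating least-squares updates, and ridge regularization are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): two modalities, some
# subjects have both, some have MRI only. Complete subjects share one latent
# code; incomplete subjects get a modality-specific code. U_m maps the latent
# space back to modality m, and W maps latents to the label space.
import numpy as np

rng = np.random.default_rng(0)
d_mri, d_pet, k, n_both, n_mri_only = 90, 90, 10, 50, 30

# Toy data: n_both subjects have MRI+PET, n_mri_only have MRI only.
X_mri = rng.standard_normal((d_mri, n_both + n_mri_only))
X_pet = rng.standard_normal((d_pet, n_both))  # PET missing for the rest

Z_shared = rng.standard_normal((k, n_both))       # common latent codes
Z_mri_only = rng.standard_normal((k, n_mri_only)) # modality-specific codes
U_mri = rng.standard_normal((d_mri, k))
U_pet = rng.standard_normal((d_pet, k))

def ls(A, B, reg=1e-3):
    """Ridge solution of min_X ||A X - B||_F^2 + reg * ||X||_F^2."""
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ B)

for _ in range(50):  # alternate on sum_m ||X_m - U_m Z_m||_F^2
    # Update the bases with all latent codes fixed.
    Z_mri_all = np.hstack([Z_shared, Z_mri_only])
    U_mri = ls(Z_mri_all.T, X_mri.T).T
    U_pet = ls(Z_shared.T, X_pet.T).T
    # Shared latents must reconstruct both modalities -> stack the systems.
    A = np.vstack([U_mri, U_pet])
    B = np.vstack([X_mri[:, :n_both], X_pet])
    Z_shared = ls(A, B)
    # Modality-specific latents only reconstruct their own modality.
    Z_mri_only = ls(U_mri, X_mri[:, n_both:])

# Project latents to the label space with a ridge classifier (illustrative).
Z_train = np.hstack([Z_shared, Z_mri_only])
y = rng.integers(0, 2, Z_train.shape[1])  # toy binary labels (e.g., NC/AD)
Y = np.eye(2)[y].T                        # one-hot, 2 x n
W = ls(Z_train.T, Y.T).T                  # min ||Y - W Z||_F^2
pred = np.argmax(W @ Z_train, axis=0)
print("toy training accuracy:", (pred == y).mean())
```

Two design points the sketch tries to capture: stacking the per-modality reconstruction systems when solving for the shared codes is what couples the modalities, and dropping only a subject's missing block, rather than the whole subject, is what lets incomplete samples still contribute to training.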


Figures

Fig. 1.
Illustration of our proposed AD diagnosis framework. First, we project the original features into a latent representation space. Within this space, we use samples with complete multi-modality data to learn a common latent representation, and samples with incomplete multi-modality data to learn modality-specific representations. Finally, the latent representations are projected to the label space for AD diagnosis.
Fig. 2.
Classification results in terms of ACC (top) and AUC (bottom) achieved by 9 different methods on three classification tasks, i.e., NC vs. MCI vs. AD (left), NC vs. sMCI vs. pMCI vs. AD (middle), and sMCI vs. pMCI (right), using complete multi-modality data in training and testing (except "Ours", which uses all available samples, including those with incomplete multi-modality data, for model training). The error bars denote the standard deviations of the results.
Fig. 3.
Classification results in terms of ACC (top) and AUC (bottom) achieved by 8 different methods for three classification tasks, i.e., NC vs. MCI vs. AD (left), NC vs. sMCI vs. pMCI vs. AD (middle), and sMCI vs. pMCI (right), using incomplete multi-modality data in training and testing. The error bars denote the standard deviations of the results.
Fig. 4.
Classification accuracies of the proposed method trained using all available training data (including incomplete multi-modality data), and tested using only complete multi-modality data. Four different combinations of modalities (i.e., MRI+PET, MRI+SNP, PET+SNP, and MRI+PET+SNP) and three different combinations of disease cohorts (or classification tasks) are used in this group of experiments. The error bars denote the standard deviations of results.
Fig. 5.
The classification accuracies of the proposed method, which was trained using all available training data (including incomplete multi-modality data) and tested using all available testing data (including incomplete multi-modality data). Four different combinations of modalities (i.e., MRI+PET, MRI+SNP, PET+SNP, and MRI+PET+SNP) and three different combinations of disease cohorts (or classification tasks) were used in this group of experiments. The error bars denote the standard deviations of results.
Fig. 6.
Classification accuracy (i.e., ACC) achieved by different methods when r% of subjects (left: r = 10; right: r = 20) have missing MRI (top) or SNP (bottom) data. The error bars denote the standard deviations of the results.
Fig. 7.
Classification accuracy (i.e., ACC) achieved by different methods when 50% of subjects are missing MRI (left) or SNP (right) data in sMCI/pMCI classification.
Fig. 8.
Classification accuracy (i.e., ACC) achieved by different methods when half of the subjects are missing both MRI and SNP data in sMCI/pMCI classification.
Fig. 9.
Classification accuracies of our proposed method for the NC/MCI/AD classification task using different settings of the hyper-parameters, i.e., β, γ, η ∈ {10⁻⁵, …, 10²} (a minimal sweep sketch follows the figure list).
Fig. 10.
Top ten selected ROIs for MRI data in three classification tasks. From top to bottom: NC/MCI/AD, NC/sMCI/pMCI/AD, and sMCI/pMCI. Here, different colors denote different ROIs.
Fig. 11.
Top ten selected ROIs for PET data in three classification tasks. From top to bottom: NC/MCI/AD, NC/sMCI/pMCI/AD, and sMCI/pMCI. Here, different colors denote different ROIs.
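The hyper-parameter study in Fig. 9 amounts to sweeping each regularization weight over powers of ten. A hedged sketch of such a sweep follows; `cross_val_acc` is a hypothetical placeholder standing in for training and evaluating the model at one setting, not a function from the paper's code.

```python
# Sketch of the Fig. 9 style sweep: beta, gamma, eta each range over
# {10^-5, ..., 10^2}. cross_val_acc is a hypothetical stand-in.
import itertools

grid = [10.0 ** p for p in range(-5, 3)]  # 10^-5, 10^-4, ..., 10^2

def cross_val_acc(beta, gamma, eta):
    """Placeholder: fit the latent-representation model with these
    regularization weights and return cross-validated accuracy."""
    return 0.0  # assumption: supplied by the actual training pipeline

best = max(itertools.product(grid, grid, grid),
           key=lambda params: cross_val_acc(*params))
print("best (beta, gamma, eta):", best)
```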
