ADFound: A Foundation Model for Diagnosis and Prognosis of Alzheimer's Disease
- PMID: 40460008
- DOI: 10.1109/JBHI.2025.3576436
Abstract
Alzheimer's disease (AD) is an incurable neurodegenerative disorder characterized by progressive cognitive and functional decline. Consequently, early diagnosis and accurate prediction of disease progression are of paramount importance and inherently complex, necessitating the integration of multi-modal data. However, most existing methods are task-specific models that lack generalization ability, addressing only one task at a time and failing to simultaneously assess disease diagnosis and progression. In this paper, we introduce ADFound, the first foundation model for AD that serves as a basis for various downstream tasks, such as diagnosis and prognosis, with high generalization capability. ADFound leverages a substantial amount of unlabeled 3D multi-modal neuroimaging, including paired and unpaired data, to achieve its objectives. Specifically, ADFound is built on a multi-modal Vision Mamba (Vim) encoder composed of Vision Mamba blocks to capture the long-range dependencies inherent in 3D multi-modal medical images. To efficiently pre-train ADFound on unlabeled paired and unpaired multi-modal neuroimaging data, we propose a novel self-supervised learning framework that integrates a multi-modal masked autoencoder (MAE) and contrastive learning. The multi-modal MAE learns local relations among modalities by reconstructing images from unmasked image patches. Additionally, we introduce a Dual Contrastive Learning scheme for multi-modal data to enhance the discriminative capability of multi-modal representations from intra-modal and inter-modal perspectives. Our experiments demonstrate that ADFound outperforms state-of-the-art methods across a wide range of downstream tasks relevant to the diagnosis and prognosis of AD. Furthermore, the results indicate that our foundation model can be extended to more modalities, such as non-image data, showing its versatility. The code is available at https://github.com/guangqianyang/ADFound.git.
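The abstract describes a dual contrastive objective combining intra-modal terms (two views of the same scan) with an inter-modal term (paired scans from different modalities). As a minimal sketch of that idea, the following NumPy code computes an InfoNCE-style loss over both kinds of pairs; the function names, the two-augmented-view setup, and the equal weighting of the terms are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: the positive for row i of `anchors` is row i of
    `positives`; all other rows in the batch act as negatives."""
    # L2-normalize embeddings so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positives on the diagonal

def dual_contrastive_loss(mri, mri_aug, pet, pet_aug, temperature=0.1):
    """Illustrative dual contrastive objective: intra-modal terms pull two
    augmented views of the same scan together; the inter-modal term aligns
    paired MRI and PET embeddings of the same subject."""
    intra = info_nce(mri, mri_aug, temperature) + info_nce(pet, pet_aug, temperature)
    inter = info_nce(mri, pet, temperature)
    return intra + inter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mri = rng.normal(size=(8, 64))      # hypothetical batch of MRI embeddings
    pet = rng.normal(size=(8, 64))      # hypothetical paired PET embeddings
    loss = dual_contrastive_loss(mri,
                                 mri + 0.01 * rng.normal(size=(8, 64)),
                                 pet,
                                 pet + 0.01 * rng.normal(size=(8, 64)))
    print(float(loss))
```

Minimizing this loss pushes matched pairs (same subject, same or different modality) together while pushing apart all other pairs in the batch, which is the sense in which the abstract's intra- and inter-modal perspectives sharpen the learned representations.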
Similar articles
- Alzheimer's disease diagnosis from multi-modal data via feature inductive learning and dual multilevel graph neural network. Med Image Anal. 2024 Oct;97:103213. doi: 10.1016/j.media.2024.103213. Epub 2024 May 28. PMID: 38850625
- A modality-collaborative convolution and transformer hybrid network for unpaired multi-modal medical image segmentation with limited annotations. Med Phys. 2023 Sep;50(9):5460-5478. doi: 10.1002/mp.16338. Epub 2023 Mar 15. PMID: 36864700
- HAMMF: Hierarchical attention-based multi-task and multi-modal fusion model for computer-aided diagnosis of Alzheimer's disease. Comput Biol Med. 2024 Jun;176:108564. doi: 10.1016/j.compbiomed.2024.108564. Epub 2024 May 8. PMID: 38744010
- Weighted Multi-Modal Contrastive Learning Based Hybrid Network for Alzheimer's Disease Diagnosis. IEEE Trans Neural Syst Rehabil Eng. 2025;33:1135-1144. doi: 10.1109/TNSRE.2025.3549730. Epub 2025 Mar 19. PMID: 40063426
- Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer. Comput Med Imaging Graph. 2025 Apr;121:102502. doi: 10.1016/j.compmedimag.2025.102502. Epub 2025 Feb 3. PMID: 39919535