BPGAN: Brain PET synthesis from MRI using generative adversarial network for multi-modal Alzheimer's disease diagnosis
- PMID: 35167997
- DOI: 10.1016/j.cmpb.2022.106676
Abstract
Background and objective: Multi-modal medical images, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), have been widely used for diagnosing brain disorders such as Alzheimer's disease (AD) because they provide complementary information. PET scans can detect cellular changes in organs and tissues earlier than MRI. However, unlike MRI, PET data are difficult to acquire because of cost, radiation exposure, and other limitations, and PET scans are missing for many subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. To address this problem, a 3D end-to-end generative adversarial network (named BPGAN) is proposed to synthesize brain PET from MRI scans, serving as a potential data-completion scheme for multi-modal medical image research.
Methods: We propose BPGAN, which learns an end-to-end mapping from input MRI scans to their underlying PET scans. First, we design a 3D multiple convolution U-Net (MCU) generator architecture to improve the visual quality of the synthetic results while preserving the diverse brain structures of different subjects. By further employing a 3D gradient profile (GP) loss and a structural similarity index measure (SSIM) loss, the synthetic PET scans achieve higher similarity to the ground truth. We also explore alternative data-partitioning strategies to study their impact on the performance of the proposed method in different medical scenarios.
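As an informal illustration of the training objective described above, the sketch below combines an adversarial term with 3D gradient profile and SSIM losses in PyTorch. The loss weights, the simplified global SSIM, the optional L1 term, and the finite-difference gradient formulation are assumptions for illustration only, not the paper's exact implementation.

```python
# Minimal sketch of a generator objective with adversarial, L1, SSIM, and
# 3D gradient-profile terms. Weights and formulations are assumptions.
import torch
import torch.nn.functional as F

def gradient_profile_loss(fake, real):
    """Penalize mismatched finite-difference gradients along the D, H, W
    axes of a volume shaped (N, C, D, H, W)."""
    loss = 0.0
    for dim in (2, 3, 4):
        loss = loss + F.l1_loss(torch.diff(fake, dim=dim),
                                torch.diff(real, dim=dim))
    return loss / 3.0

def ssim_3d(fake, real, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM over the whole volume, assuming intensities
    normalized to [0, 1]; the paper may use a windowed variant."""
    mu_f, mu_r = fake.mean(), real.mean()
    var_f, var_r = fake.var(), real.var()
    cov = ((fake - mu_f) * (real - mu_r)).mean()
    return ((2 * mu_f * mu_r + c1) * (2 * cov + c2)) / (
        (mu_f ** 2 + mu_r ** 2 + c1) * (var_f + var_r + c2))

def generator_loss(d_fake_logits, fake_pet, real_pet,
                   lambda_gp=1.0, lambda_ssim=1.0, lambda_l1=10.0):
    """Adversarial + L1 + SSIM + gradient-profile terms (weights assumed)."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    l1 = F.l1_loss(fake_pet, real_pet)
    ssim_term = 1.0 - ssim_3d(fake_pet, real_pet)
    gp = gradient_profile_loss(fake_pet, real_pet)
    return adv + lambda_l1 * l1 + lambda_ssim * ssim_term + lambda_gp * gp
```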
Results: We conduct experiments on the publicly available ADNI database. The proposed BPGAN is evaluated using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and SSIM, and it outperforms the compared models on these quantitative metrics. Qualitative evaluations also validate the effectiveness of our approach. Additionally, when MRI is combined with our synthetic PET scans, the accuracies of multi-class AD diagnosis on dataset-A and dataset-B are 85.00% and 56.47%, respectively, each about 1% higher than with MRI alone.
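For reference, the sketch below shows one way to compute the three reported metrics (MAE, PSNR, SSIM) for a synthetic/ground-truth volume pair using NumPy and scikit-image. The intensity range and default SSIM window are assumptions, since the abstract does not specify the exact evaluation settings.

```python
# Sketch of the three reported metrics; data range and SSIM window are
# assumptions, not the paper's exact evaluation protocol.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_volume(synthetic, ground_truth, data_range=None):
    """Return (MAE, PSNR, SSIM) for two 3D volumes of identical shape."""
    synthetic = synthetic.astype(np.float64)
    ground_truth = ground_truth.astype(np.float64)
    if data_range is None:
        data_range = ground_truth.max() - ground_truth.min()

    mae = np.mean(np.abs(synthetic - ground_truth))
    mse = np.mean((synthetic - ground_truth) ** 2)
    psnr = 10.0 * np.log10((data_range ** 2) / mse) if mse > 0 else np.inf
    ssim = structural_similarity(ground_truth, synthetic,
                                 data_range=data_range)
    return mae, psnr, ssim
```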
Conclusions: The quantitative measures, qualitative displays, and classification results demonstrate that the PET images synthesized by BPGAN are plausible and of high quality, and that they provide complementary information that improves AD diagnosis. This work offers a valuable reference for multi-modal medical image analysis.
Keywords: Alzheimer’s disease; Generative adversarial networks; MRI; Medical imaging synthesis; PET.
Copyright © 2022. Published by Elsevier B.V.
Conflict of interest statement
Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Similar articles
- Deep convolutional generative adversarial network for Alzheimer's disease classification using positron emission tomography (PET) and synthetic data augmentation. Microsc Res Tech. 2021 Dec;84(12):3023-3034. doi: 10.1002/jemt.23861. PMID: 34245203
- Generation of synthetic PET/MR fusion images from MR images using a combination of generative adversarial networks and conditional denoising diffusion probabilistic models based on simultaneous 18F-FDG PET/MR image data of pyogenic spondylodiscitis. Spine J. 2024 Aug;24(8):1467-1477. doi: 10.1016/j.spinee.2024.04.007. PMID: 38615932
- Multi-Modal Brain Tumor Data Completion Based on Reconstruction Consistency Loss. J Digit Imaging. 2023 Aug;36(4):1794-1807. doi: 10.1007/s10278-022-00697-6. PMID: 36856903. Free PMC article.
- Deep Convolutional Neural Networks With Ensemble Learning and Generative Adversarial Networks for Alzheimer's Disease Image Data Classification. Front Aging Neurosci. 2021 Aug 17;13:720226. doi: 10.3389/fnagi.2021.720226. PMID: 34483890. Free PMC article. Review.
- CT artifact correction for sparse and truncated projection data using generative adversarial networks. Med Phys. 2021 Feb;48(2):615-626. doi: 10.1002/mp.14504. PMID: 32996149. Review.
Cited by
- A multi-view learning approach with diffusion model to synthesize FDG PET from MRI T1WI for diagnosis of Alzheimer's disease. Alzheimers Dement. 2025 Feb;21(2):e14421. doi: 10.1002/alz.14421. PMID: 39641380. Free PMC article.
- An interpretable generative multimodal neuroimaging-genomics framework for decoding Alzheimer's disease. ArXiv [Preprint]. 2025 Feb 4:arXiv:2406.13292v3. PMID: 38947922. Free PMC article. Preprint.
- Deep Learning-Based Diagnosis Algorithm for Alzheimer's Disease. J Imaging. 2024 Dec 23;10(12):333. doi: 10.3390/jimaging10120333. PMID: 39728230. Free PMC article.
- MRAβ: A multimodal MRI-derived amyloid-β biomarker for Alzheimer's disease. Hum Brain Mapp. 2023 Oct 15;44(15):5139-5152. doi: 10.1002/hbm.26452. PMID: 37578386. Free PMC article.
- Deep-learning predicted PET can be subtracted from the true clinical fluorodeoxyglucose PET co-registered to MRI to identify the epileptogenic zone in focal epilepsy. Epilepsia Open. 2023 Dec;8(4):1440-1451. doi: 10.1002/epi4.12820. PMID: 37602538. Free PMC article.