Dataset augmentation with multiple contrasts images in super-resolution processing of T1-weighted brain magnetic resonance images
- PMID: 39680317
- DOI: 10.1007/s12194-024-00871-1
Abstract
This study investigated the effectiveness of dataset augmentation for deep-learning super-resolution processing of T1-weighted images (T1WIs) from brain magnetic resonance imaging (MRI). By incorporating images with different contrasts from the same subjects, this study sought to improve network performance and assess the impact on image quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). This retrospective study included 240 patients who underwent brain MRI. Two types of datasets were created: the Pure-Dataset group, comprising T1WIs only, and the Mixed-Dataset group, comprising T1WIs, T2-weighted images, and fluid-attenuated inversion recovery images. A U-Net-based network and an Enhanced Deep Super-Resolution network (EDSR) were trained on these datasets. Objective image quality was analyzed using PSNR and SSIM. Statistical analyses, including the paired t test and Pearson's correlation coefficient, were conducted to evaluate the results. Augmenting the datasets with images of different contrasts significantly improved training accuracy as the dataset size increased. PSNR values ranged from 29.84 to 30.26 dB for U-Net trained on mixed datasets, and SSIM values ranged from 0.9858 to 0.9868. Similarly, PSNR values ranged from 32.34 to 32.64 dB for EDSR trained on mixed datasets, and SSIM values ranged from 0.9941 to 0.9945. Significant differences in PSNR and SSIM were observed between models trained on pure and mixed datasets. Pearson's correlation coefficient indicated a strong positive correlation between dataset size and image quality metrics. Using diverse image data obtained from the same subjects can improve the performance of deep-learning models in medical image super-resolution tasks.
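As a brief illustration of the evaluation described above, the following Python sketch shows how per-slice PSNR and SSIM could be computed with scikit-image, and how the paired t test and Pearson's correlation coefficient mentioned in the abstract could be applied with SciPy. This is not the authors' implementation; all array names, dataset sizes, and numeric values below are illustrative placeholders.

    # Minimal sketch (assumed workflow, not the authors' code) of the image
    # quality metrics and statistical tests named in the abstract.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity
    from scipy.stats import ttest_rel, pearsonr

    def slice_metrics(reference, restored, data_range=None):
        """Return (PSNR in dB, SSIM) for one high-resolution reference slice
        and the corresponding super-resolved slice."""
        if data_range is None:
            data_range = reference.max() - reference.min()
        psnr = peak_signal_noise_ratio(reference, restored, data_range=data_range)
        ssim = structural_similarity(reference, restored, data_range=data_range)
        return psnr, ssim

    # Hypothetical per-case PSNR values for the same test images restored by
    # models trained on the pure and mixed datasets.
    psnr_pure = np.array([29.1, 29.5, 29.3])
    psnr_mixed = np.array([30.0, 30.2, 30.1])

    # Paired t test comparing the two training conditions on matched cases.
    t_stat, p_value = ttest_rel(psnr_mixed, psnr_pure)

    # Pearson's correlation between dataset size and mean image quality
    # (placeholder values for illustration only).
    dataset_sizes = np.array([60, 120, 180, 240])
    mean_psnr = np.array([29.8, 30.0, 30.1, 30.3])
    r, p_corr = pearsonr(dataset_sizes, mean_psnr)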
Keywords: Deep-learning methods; Image processing; Peak signal-to-noise ratio; Structural similarity; Super-resolution.
© 2024. The Author(s), under exclusive licence to Japanese Society of Radiological Technology and Japan Society of Medical Physics.
Conflict of interest statement
Declarations. Conflict of interest: The authors declare that there are no conflicts of interest regarding this study. All authors fully agree with the content of this manuscript, and no funding from specific companies or organizations, or other competing interests, have influenced the results of this research. Ethical approval: This study was conducted with the approval of the ethics committee of the facility to which the author belongs. The images used in this study were deidentified and anonymized to protect the personal information of patients.