Tailored self-supervised pretraining improves brain MRI diagnostic models
- PMID: 40252479
- DOI: 10.1016/j.compmedimag.2025.102560
Abstract
Self-supervised learning has shown potential in enhancing deep learning methods, yet its application in brain magnetic resonance imaging (MRI) analysis remains underexplored. This study seeks to leverage large-scale, unlabeled public brain MRI datasets to improve the performance of deep learning models in various downstream tasks for the development of clinical decision support systems. To enhance training efficiency, data filtering methods based on image entropy and slice positions were developed, condensing a combined dataset of approximately 2 million images from fastMRI-brain, OASIS-3, IXI, and BraTS21 into a more focused set of 250 K images enriched with brain features. The Momentum Contrast (MoCo) v3 algorithm was then employed to learn these image features, resulting in robustly pretrained models specifically tailored to brain MRI. The pretrained models were subsequently evaluated in tumor classification, lesion detection, hippocampal segmentation, and image reconstruction tasks. The results demonstrate that our brain MRI-oriented pretraining outperformed both ImageNet pretraining and pretraining on larger multi-organ, multi-modality medical datasets, achieving a ∼2.8 % increase in 4-class tumor classification accuracy, a ∼0.9 % improvement in tumor detection mean average precision, a ∼3.6 % gain in adult hippocampal segmentation Dice score, and a ∼0.1 PSNR improvement in reconstruction at 2-fold acceleration. This study underscores the potential of self-supervised learning for brain MRI using large-scale, tailored datasets derived from public sources.
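The abstract mentions condensing ~2 million slices into ~250K by filtering on image entropy and slice position. As a rough illustration only, the sketch below shows one plausible way such a filter could work; the threshold value, relative slice-position bounds, and function names are illustrative assumptions, not the authors' published implementation.

```python
# Hedged sketch of entropy- and position-based slice filtering (assumed, not the paper's exact method).
import numpy as np

def slice_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of a 2D slice's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist.astype(np.float64)
    p = p[p > 0]
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def keep_slice(volume: np.ndarray, idx: int,
               entropy_thresh: float = 3.0,          # assumed threshold
               rel_bounds: tuple = (0.2, 0.8)) -> bool:  # assumed position bounds
    """Keep a slice only if it lies away from the volume edges (slice-position
    criterion) and its intensity entropy suggests substantial brain content."""
    n_slices = volume.shape[0]
    lo, hi = int(rel_bounds[0] * n_slices), int(rel_bounds[1] * n_slices)
    if not (lo <= idx < hi):
        return False
    return slice_entropy(volume[idx]) >= entropy_thresh
```

In practice the cutoffs would be tuned per dataset (fastMRI-brain, OASIS-3, IXI, BraTS21) so that mostly-empty edge slices are discarded while feature-rich central slices are retained for MoCo v3 pretraining.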
Keywords: Brain imaging; Tumor classification; Representation learning; Feature extraction; Self-supervised learning.
Copyright © 2025 Elsevier Ltd. All rights reserved.
Conflict of interest statement
Declaration of Competing Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Similar articles
- Self-supervised learning improves robustness of deep learning lung tumor segmentation models to CT imaging differences. Med Phys. 2025 Mar;52(3):1573-1588. doi: 10.1002/mp.17541. Epub 2024 Dec 5. PMID: 39636237
- Self-supervised-RCNN for medical image segmentation with limited data annotation. Comput Med Imaging Graph. 2023 Oct;109:102297. doi: 10.1016/j.compmedimag.2023.102297. Epub 2023 Sep 9. PMID: 37729826
- Image-level supervision and self-training for transformer-based cross-modality tumor segmentation. Med Image Anal. 2024 Oct;97:103287. doi: 10.1016/j.media.2024.103287. Epub 2024 Jul 31. PMID: 39111265
- A review of self-supervised, generative, and few-shot deep learning methods for data-limited magnetic resonance imaging segmentation. NMR Biomed. 2024 Aug;37(8):e5143. doi: 10.1002/nbm.5143. Epub 2024 Mar 24. PMID: 38523402. Review.
- A survey of the impact of self-supervised pretraining for diagnostic tasks in medical X-ray, CT, MRI, and ultrasound. BMC Med Imaging. 2024 Apr 6;24(1):79. doi: 10.1186/s12880-024-01253-0. PMID: 38580932. Review.