Comput Methods Programs Biomed. 2025 Apr;261:108634. doi: 10.1016/j.cmpb.2025.108634. Epub 2025 Jan 31.

Why does my medical AI look at pictures of birds? Exploring the efficacy of transfer learning across domain boundaries


Frederic Jonske et al. Comput Methods Programs Biomed. 2025 Apr.

Abstract

Purpose: In medical deep learning, models that are not trained from scratch are typically fine-tuned from ImageNet-pretrained models. We posit that pretraining on data from the domain of the downstream task should almost always be preferable.

Materials and methods: We leverage RadNet-12M and RadNet-1.28M, datasets containing more than 12 million and 1.28 million CT image slices, respectively, acquired from 90,663 individual scans, and explore the efficacy of self-supervised, contrastive pretraining on the medical and natural image domains. We compare the respective performance gains for five downstream tasks. For each experiment, we report accuracy, AUC, or Dice score, with uncertainty estimates based on four separate runs. We quantify significance using Welch's t-test. Finally, we perform feature space analysis to characterize the nature of the observed performance gains.
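
As a rough illustration of the significance test described above, the following sketch compares two hypothetical sets of four downstream accuracies (one per pretraining domain) with Welch's t-test via SciPy; the accuracy values are placeholders for illustration, not results from the paper.

    # Welch's t-test over four runs per configuration, as described above.
    # Accuracy values are illustrative placeholders, not reported results.
    from scipy.stats import ttest_ind

    radnet_acc = [0.842, 0.845, 0.839, 0.844]    # hypothetical: 4 runs, RadNet pretraining
    imagenet_acc = [0.838, 0.840, 0.836, 0.841]  # hypothetical: 4 runs, ImageNet pretraining

    # equal_var=False selects Welch's t-test (no equal-variance assumption)
    t_stat, p_value = ttest_ind(radnet_acc, imagenet_acc, equal_var=False)
    print(f"Welch's t-test: t = {t_stat:.3f}, p = {p_value:.3f}")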

Results: We observe that intra-domain transfer (RadNet pretraining and CT-based tasks) compares favorably to cross-domain transfer (ImageNet pretraining and CT-based tasks), generally achieving comparable or improved performance: Δ = +0.44% (p = 0.541) when fine-tuning on RadNet-1.28M, Δ = +2.07% (p = 0.025) when linearly evaluating on RadNet-1.28M, and Δ = +1.63% (p = 0.114) when fine-tuning on 1% of RadNet-1.28M data. This intra-domain advantage extends to LiTS 2017, another CT-based dataset, but not to other medical imaging modalities. A corresponding intra-domain advantage was also observed for natural images. Outside the CT image domain, ImageNet-pretrained models generalized better than RadNet-pretrained models. We further demonstrate that pretraining on medical images yields domain-specific features that are preserved during fine-tuning, and which correspond to macroscopic image properties and structures.
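
To make the distinction between fine-tuning and linear evaluation in these comparisons concrete, here is a minimal PyTorch sketch; the ResNet-50 backbone, the state-dict loading, and the two-class head are assumptions for illustration, not the paper's exact setup.

    # Sketch of fine-tuning vs. linear evaluation on a pretrained encoder.
    # Backbone, weight format, and head size are illustrative assumptions.
    import torch.nn as nn
    from torchvision.models import resnet50

    def build_model(pretrained_state_dict, num_classes=2, linear_eval=False):
        model = resnet50()
        # Load e.g. RadNet- or ImageNet-pretrained encoder weights.
        model.load_state_dict(pretrained_state_dict, strict=False)
        if linear_eval:
            # Linear evaluation: freeze the encoder; only the new head below trains.
            for param in model.parameters():
                param.requires_grad = False
        # Replace the classification head; fine-tuning keeps the encoder trainable.
        model.fc = nn.Linear(model.fc.in_features, num_classes)
        return model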

Conclusion: We conclude that intra-domain pretraining generally outperforms cross-domain pretraining, but that very narrow domain definitions apply. Put simply, pretraining on CT images instead of natural images yields an advantage when fine-tuning on CT images, and only on CT images. We further conclude that ImageNet pretraining remains a strong baseline, and the best choice for pretraining when insufficient data from the target domain is available. Finally, we publish our pretrained models and pretraining guidelines as a baseline for future research.

Keywords: Deep learning; Domain adaptation; Foundation model; Transfer learning; Unsupervised pretraining.


Conflict of interest statement

Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
