Review

A survey of the impact of self-supervised pretraining for diagnostic tasks in medical X-ray, CT, MRI, and ultrasound

Blake VanBerlo et al. BMC Med Imaging. 2024 Apr 6;24(1):79. doi: 10.1186/s12880-024-01253-0.

Abstract

Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning, leveraging large amounts of unlabelled data. This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging, concentrating on studies that compare self-supervised pretraining to fully supervised learning for diagnostic tasks such as classification and segmentation. The most pertinent finding is that self-supervised pretraining generally improves downstream task performance compared to full supervision, most prominently when unlabelled examples greatly outnumber labelled examples. Based on the aggregate evidence, recommendations are provided for practitioners considering using self-supervised learning. Motivated by limitations identified in current research, directions and practices for future study are suggested, such as integrating clinical knowledge with theoretically justified self-supervised learning methods, evaluating on public datasets, growing the modest body of evidence for ultrasound, and characterizing the impact of self-supervised pretraining on generalization.

Keywords: Computed tomography; Machine learning; Magnetic resonance imaging; Radiology; Representation learning; Self-supervised learning; Ultrasound; X-ray.


Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. Example of a typical self-supervised learning (SSL) workflow, with an application to chest X-ray classification. (1) Self-supervised pretraining: a parameterized model gϕ(fθ(x)) is trained to solve a pretext task using only the chest X-rays. The labels for the pretext task are derived from the inputs themselves, and the model is trained to minimize the pretext objective Lpre. At the end of this step, fθ should output useful feature representations. (2) Supervised fine-tuning: a parameterized model qψ(fθ(x)) is trained to solve the supervised task of chest X-ray classification using labels specific to that task. Note that the previously learned fθ is reused here, as it produces feature representations specific to chest X-rays. (A minimal code sketch of this two-stage workflow follows the figure list.)
Fig. 2. Breakdown of the papers included in this survey by (a) imaging modality and (b) year of publication.
Fig. 3. Examples of generative SSL pretext tasks.
Fig. 4. Examples of predictive SSL pretext tasks.
Fig. 5. A depiction of the forward pass for a positive pair in a standard noncontrastive pretext task. An image is subjected to stochastic data transformations twice, producing distorted views xa and xb, which are passed through the feature extractor fθ to yield feature representations ha and hb. The projector gϕ transforms ha and hb into embeddings za and zb, respectively. Typically, the objective L is optimized to maximize the similarity of za and zb. (A second code sketch after the figure list illustrates this step.)
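For readers who want to see how the two-stage workflow of Fig. 1 maps onto code, the sketch below is a minimal illustration in a PyTorch style. The small convolutional encoder, the rotation-prediction pretext task, the projector, and the classification head are placeholders introduced here for brevity; they are not the specific architectures or pretext objectives evaluated in the surveyed studies.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative feature extractor f_theta (a real study would typically use a
# ResNet or vision transformer backbone).
class Encoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

# Step 1: self-supervised pretraining. The encoder and a projector g_phi are
# trained to minimize a pretext objective L_pre; here the pretext task is a
# simple rotation-prediction placeholder (0/90/180/270 degrees), so labels
# come from the images themselves and no diagnostic labels are needed.
def pretrain(encoder, unlabelled_loader, epochs=1, lr=1e-3, feat_dim=128):
    projector = nn.Linear(feat_dim, 4)  # g_phi
    opt = torch.optim.Adam(list(encoder.parameters()) + list(projector.parameters()), lr=lr)
    for _ in range(epochs):
        for x in unlabelled_loader:  # batches of unlabelled chest X-rays
            k = torch.randint(0, 4, (x.size(0),))
            x_rot = torch.stack([torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(x, k)])
            loss = F.cross_entropy(projector(encoder(x_rot)), k)  # L_pre
            opt.zero_grad(); loss.backward(); opt.step()
    return encoder

# Step 2: supervised fine-tuning. The pretrained f_theta is reused and a task
# head q_psi is trained on the (typically much smaller) labelled dataset.
def fine_tune(encoder, labelled_loader, num_classes=2, epochs=1, lr=1e-4, feat_dim=128):
    head = nn.Linear(feat_dim, num_classes)  # q_psi
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        for x, y in labelled_loader:  # labelled chest X-rays
            loss = F.cross_entropy(head(encoder(x)), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return encoder, head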
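The forward pass for a positive pair in Fig. 5 can likewise be sketched as a single training step. This is again an illustration under stated assumptions: augment, f_theta, and g_phi are caller-supplied placeholders, and the collapse-prevention mechanisms used by actual noncontrastive methods (e.g. momentum encoders, stop-gradients, or variance regularization as in BYOL, SimSiam, and VICReg) are deliberately omitted.

import torch.nn.functional as F

def noncontrastive_step(f_theta, g_phi, augment, x):
    # Two stochastic distortions of the same image give the positive pair.
    x_a, x_b = augment(x), augment(x)
    # Feature representations h_a, h_b from the feature extractor f_theta.
    h_a, h_b = f_theta(x_a), f_theta(x_b)
    # Embeddings z_a, z_b from the projector g_phi.
    z_a, z_b = g_phi(h_a), g_phi(h_b)
    # Objective L: maximize the similarity of z_a and z_b by minimizing the
    # negative cosine similarity. Real noncontrastive methods add extra
    # machinery to prevent the trivial collapsed solution.
    return -F.cosine_similarity(z_a, z_b, dim=-1).mean()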
