Neuroimage. 2022 Oct 15;260:119474.
doi: 10.1016/j.neuroimage.2022.119474. Epub 2022 Jul 13.

SynthStrip: skull-stripping for any brain image

Andrew Hoopes et al. Neuroimage. 2022.

Abstract

The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.
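The abstract's core idea is to synthesize training images directly from anatomical label maps, with intensities deliberately unconstrained by any real MRI contrast. A minimal sketch of that synthesis step, assuming a simple per-label intensity lookup plus additive noise (the authors' generator additionally applies spatial deformations, smoothing, bias fields, and artifacts):

```python
import numpy as np

def synthesize_image(label_map, seed=None):
    """Generate a synthetic gray-scale image from an integer label map by
    assigning each label a random mean intensity and adding noise.
    Illustrative sketch only, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    n_labels = int(label_map.max()) + 1
    # Random intensity per label: the network sees contrasts far beyond
    # any real acquisition, so it cannot rely on intensity priors.
    lut = rng.uniform(0.0, 1.0, size=n_labels)
    image = lut[label_map]
    image += rng.normal(0.0, 0.05, size=image.shape)  # additive noise
    return np.clip(image, 0.0, 1.0)

# Tiny 2D toy label map: 0 = background, 1 = brain, 2 = skull
label_map = np.array([[0, 0, 2, 2],
                      [0, 2, 1, 1],
                      [2, 1, 1, 1],
                      [2, 1, 1, 1]])
img = synthesize_image(label_map, seed=0)
```

Because each call redraws the lookup table, the same label map yields a new, differently contrasted image at every training step, matching the behavior shown in Fig. 2.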

Keywords: Brain extraction; Deep learning; Image synthesis; MRI-contrast agnosticism; Skull stripping.


Conflict of interest statement

Declaration of competing interest Bruce Fischl has a financial interest in CorticoMetrics, a company whose medical pursuits focus on brain imaging and measurement technologies. This interest is reviewed and managed by Massachusetts General Hospital and Mass General Brigham in accordance with their conflict-of-interest policies. The authors declare that they have no other known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Fig. 1.
Examples of SynthStrip brain extractions (bottom) for a wide range of image acquisitions and modalities (top). Powered by a strategy for synthesizing diverse training data, SynthStrip learns to skull-strip brain images of any type.
Fig. 2.
Samples of synthetic images used for SynthStrip training. To encourage the network to generalize, we synthesize images that far exceed the realistic range of whole-brain acquisitions. In this figure, each brain image is generated from the same label map. In practice, we use label maps from several different subjects.
Fig. 3.
SynthStrip training framework. At every optimization step, we sample a randomly transformed brain segmentation st, from which we synthesize a gray-scale image x with arbitrary contrast. The skull-stripping 3D U-Net receives x as input and predicts a thresholded signed distance transform (SDT) d representing the distance of each voxel to the skull boundary. The U-Net consists of skip-connected, multi-resolution convolutional layers illustrated by gray bars, with their number of output filters indicated below. We train SynthStrip in a supervised fashion, maximizing the similarity between d and the ground-truth SDT d^ within a ribbon of set distance around the brain, derived directly from the segmentation labels of st.
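The loss described in this caption compares predicted and ground-truth SDTs only within a band around the brain boundary. A hedged sketch of that idea, assuming a mean-squared-error comparison and a hypothetical `ribbon` width parameter (the paper's exact loss formulation and band width may differ):

```python
import numpy as np

def sdt_ribbon_loss(pred_sdt, true_sdt, ribbon=2.0):
    """Mean squared error between predicted and ground-truth signed
    distance transforms, restricted to voxels whose true distance to
    the boundary is at most `ribbon`. Voxels far from the boundary
    contribute nothing, focusing capacity on the mask edge."""
    mask = np.abs(true_sdt) <= ribbon
    return ((pred_sdt - true_sdt) ** 2)[mask].mean()

# Toy 1D example: true distances along a line crossing the boundary,
# with a prediction that is off by one everywhere.
true_sdt = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
pred_sdt = true_sdt + 1.0
loss = sdt_ribbon_loss(pred_sdt, true_sdt, ribbon=2.0)  # MSE over the 3 in-band voxels
```

Restricting supervision to the band also explains the smoothness benefit reported in Fig. 7B: the regression target varies smoothly across the boundary, unlike a hard binary mask.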
Fig. 4.
SynthStrip accuracy compared to baseline methods, across all images in the test set. Images are sorted by the score of the top performing skull-stripping method. Each dot represents a single brain mask derived with a particular tool, and each column of dots represents the scores obtained for a single image across tools. See Supplementary Fig. S2 for a version showing each baseline in a different color.
Fig. 5.
SynthStrip and baseline skull-stripping performance for near-isotropic, T1w adult MR brain images. Black dots indicate median scores. For all metrics except sensitivity and specificity, SynthStrip yields optimal brain masks. The high specificity achieved by ROBEX and BEaST comes at the cost of substantial under-segmentation of the brain mask, as indicated by their low sensitivity scores. The inverse is true for FSW, which tends to substantially over-segment the brain.
Fig. 6.
Considering all non-T1w, thick-slice, and infant images in the evaluation set, SynthStrip surpasses baseline accuracy by a wide margin. In this figure, we include only baselines that generalize to acquisition protocols and modalities beyond the common structural T1w MRI scans. Black dots indicate median scores.
Fig. 7.
A: SynthStrip variability across time-series data, measured by the percentage of discordant voxel locations (DV) across diffusion-encoded directions, relative to the brain mask volume. The ROBEX median % DV extends beyond the chart axis, as indicated by the black arrow. B: Effect of SDT- and Dice-based loss functions during training. A SynthStrip model trained with the SDT loss predicts substantially smoother brain masks (boundaries indicated in orange) than a model trained with the Dice loss, resulting in considerably lower maximum surface distance (MD) to ground-truth masks and a lower percentage of exposed boundary voxels (EBV).
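The DV metric in panel A counts voxels whose mask label disagrees across frames of a time series. One plausible reading of that metric, sketched under the assumption that DV is the union-minus-intersection voxel count normalized by the mean mask volume (the authors' exact normalization may differ):

```python
import numpy as np

def percent_discordant_voxels(masks):
    """Percentage of voxel locations whose binary mask label disagrees
    across a series of frames (e.g. diffusion directions), relative to
    the mean mask volume. Illustrative sketch of the DV idea only."""
    masks = np.asarray(masks, dtype=bool)        # shape: (frames, *volume)
    union = masks.any(axis=0)                    # labeled in any frame
    intersection = masks.all(axis=0)             # labeled in every frame
    discordant = union & ~intersection           # labeled in only some frames
    mean_volume = masks.sum(axis=tuple(range(1, masks.ndim))).mean()
    return 100.0 * discordant.sum() / mean_volume

# Toy example: two frames of a 4-voxel "volume" disagreeing at one voxel.
dv = percent_discordant_voxels([[1, 1, 0, 0],
                                [1, 1, 1, 0]])
```

A perfectly consistent tool scores 0% DV, so lower is better, matching the ordering in the figure.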
Fig. 8.
Representative skull-stripping errors for SynthStrip and baseline methods. White arrows indicate over-labeling of the brain mask, while orange arrows indicate removal of brain matter. SynthStrip errors are uncommon and, when they occur, typically involve including small regions of dura or other extracerebral tissue in the brain mask.
