Neuroimage. 2019 Jul 1;194:105-119. doi: 10.1016/j.neuroimage.2019.03.041. Epub 2019 Mar 23.

3D whole brain segmentation using spatially localized atlas network tiles


Yuankai Huo et al. Neuroimage. 2019.

Abstract

Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, which provides a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNNs) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have become the de facto standard solutions. Patch-based high-resolution 3D methods typically yield the best performance among CNN approaches on detailed whole brain segmentation (>100 labels), yet their performance commonly remains inferior to state-of-the-art multi-atlas segmentation (MAS) methods due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) only a limited number of manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCNs) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks are used in the SLANT method, each learning contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing steps with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 h to 15 min. The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
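To make the tiling-and-fusion idea concrete, the following is a minimal NumPy sketch of SLANT-style inference: an MNI-registered volume is split into a grid of overlapping sub-volumes, each sub-volume is passed to its own network (the run_tile_network stub below is hypothetical), and the overlapping label maps are fused by per-voxel majority vote. The tile sizes and counts are illustrative assumptions rather than the paper's exact configuration; only the 133-label output follows the figures.

import numpy as np

def tile_offsets(vol_shape, tile_shape, tiles_per_axis):
    # Evenly spaced start indices so the tiles cover the whole volume (with overlap when n > 1).
    per_axis = []
    for v, t, n in zip(vol_shape, tile_shape, tiles_per_axis):
        per_axis.append([0] if n == 1 else [round(i * (v - t) / (n - 1)) for i in range(n)])
    return [(x, y, z) for x in per_axis[0] for y in per_axis[1] for z in per_axis[2]]

def slant_fuse(volume, tile_shape, tiles_per_axis, run_tile_network, n_labels=133):
    # Accumulate one vote per tile per voxel, then take the per-voxel majority label.
    votes = np.zeros(volume.shape + (n_labels,), dtype=np.uint8)
    for (x, y, z) in tile_offsets(volume.shape, tile_shape, tiles_per_axis):
        sub = volume[x:x + tile_shape[0], y:y + tile_shape[1], z:z + tile_shape[2]]
        labels = run_tile_network(sub, (x, y, z))  # hypothetical per-tile inference; returns integer labels
        ix, iy, iz = np.indices(labels.shape)
        votes[x + ix, y + iy, z + iz, labels] += 1
    return votes.argmax(axis=-1)  # fused whole brain segmentation in MNI space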

Keywords: Brain segmentation; Deep learning; Label fusion; Multi-atlas; Network tiles.


Figures

Figure 1.
The proposed SLANT-27 (27 network tiles) whole brain segmentation method, which combines canonical medical image processing (registration, harmonization, label fusion) with 3D network tiles. A 3D U-Net framework is used for each tile, with the number of deconvolutional output channels increased to 133. The tiles are spatially overlapped in MNI space; the intensity input and segmentation output for one tile are visualized.
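As a concrete illustration of a single tile, here is a compact PyTorch sketch of a 3D U-Net-style network whose output head has 133 channels (one per label). The depth and channel widths below are simplified assumptions for readability, not the exact architecture used in the paper.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    # Two-level 3D U-Net-style tile; assumed widths, 133-channel output head.
    def __init__(self, n_labels=133):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv3d(32, n_labels, kernel_size=1)  # one channel per label

    def forward(self, x):            # x: (batch, 1, D, H, W), D/H/W divisible by 2
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)         # (batch, 133, D, H, W) logits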
Figure 2.
SLANT-8 and SLANT-27. SLANT-8 covers eight non-overlapping sub-spaces in MNI space, while SLANT-27 covers 27 overlapping sub-spaces. Middle coronal slices from all 27 sub-spaces are visualized (lower panel), and the number of overlays as well as the sub-spaces' overlays are shown (middle panels). The incorrect labels (red arrow) in one sub-space are corrected in the final segmentation by majority vote label fusion.
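The coverage difference between the two layouts can be sketched with a small NumPy script that counts, for each voxel, how many sub-spaces contain it under a 2x2x2 non-overlapping tiling (SLANT-8-like) versus a 3x3x3 overlapping tiling (SLANT-27-like). The volume and tile sizes below are assumed values for illustration only.

import numpy as np

def coverage_map(vol_shape, tile_shape, tiles_per_axis):
    # Count how many sub-spaces cover each voxel for a given tiling layout.
    cover = np.zeros(vol_shape, dtype=np.int32)
    starts = []
    for v, t, n in zip(vol_shape, tile_shape, tiles_per_axis):
        starts.append([0] if n == 1 else [round(i * (v - t) / (n - 1)) for i in range(n)])
    for x in starts[0]:
        for y in starts[1]:
            for z in starts[2]:
                cover[x:x + tile_shape[0], y:y + tile_shape[1], z:z + tile_shape[2]] += 1
    return cover

mni_shape = (172, 220, 156)                                       # assumed MNI grid size
print(coverage_map(mni_shape, (86, 110, 78), (2, 2, 2)).max())    # 1: no overlap (SLANT-8-like)
print(coverage_map(mni_shape, (96, 128, 88), (3, 3, 3)).max())    # >1: overlapping sub-spaces (SLANT-27-like)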
Figure 3.
Major components of the different segmentation methods. "(45)" indicates that the 45 manually traced OASIS images were used in training, while "(5111)" indicates that the 5111 auxiliary label images were used in training. The joint label fusion (JLF) and non-local spatial STAPLE (NLSS) methods were used as baselines.
Figure 4.
Qualitative results of manual segmentation, multi-atlas segmentation methods, the patch-based DCNN method, HC-Net, U-Net approaches, and the proposed SLANT methods.
Figure 5.
Sensitivity results for training SLANT-8 and SLANT-27. The mean Dice similarity coefficient (DSC) between automatic and manual segmentations at different training epochs is shown as boxplots. The left panels show segmentation performance on the five OASIS validation scans when training with the 5111 auxiliary labeled scans. The best performance was achieved at epoch 5, which was used to initialize fine-tuning; fine-tuning performance is shown in the right panels. As a result, the model at epoch 28 after fine-tuning was used for SLANT-8 and SLANT-27.
Figure 6.
Quantitative results of the baseline methods and the proposed SLANT methods. The mean Dice similarity coefficient (DSC) between automatic and manual segmentations across all testing subjects is shown as boxplots. SLANT-27 pretrained with the 5111 auxiliary labels and fine-tuned ("FT") with the 45 manual labels achieved the highest median DSC and was used as the reference method ("REF") in the statistical analysis. Methods whose difference from REF was significant under the Wilcoxon signed-rank test are marked with "*".
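For reference, the per-label DSC summarized in this figure can be computed and averaged as in the following minimal NumPy sketch; details such as excluding labels absent from both maps are assumptions here.

import numpy as np

def mean_dsc(auto_seg, manual_seg, labels):
    # Average 2|A∩B| / (|A| + |B|) over the given label set (background excluded).
    scores = []
    for lab in labels:
        a = (auto_seg == lab)
        b = (manual_seg == lab)
        denom = a.sum() + b.sum()
        if denom > 0:                      # skip labels missing from both segmentations
            scores.append(2.0 * np.logical_and(a, b).sum() / denom)
    return float(np.mean(scores))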
Figure 7.
Quantitative results of the baseline methods and the proposed SLANT methods. The mean surface distance (MSD) between automatic and manual segmentations across all testing subjects is shown as boxplots. SLANT-27 pretrained with the 5111 auxiliary labels and fine-tuned ("FT") with the 45 manual labels achieved the lowest median MSD and was used as the reference method ("REF") in the statistical analysis. Methods whose difference from REF was significant under the Wilcoxon signed-rank test are marked with "*".
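Similarly, the mean surface distance between an automatic and a manual mask for a single label can be sketched with SciPy distance transforms as below; the symmetric averaging convention is an assumption, since implementations vary.

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    # Boundary voxels: the mask minus its erosion.
    return np.logical_and(mask, np.logical_not(binary_erosion(mask)))

def mean_surface_distance(auto_mask, manual_mask, spacing=(1.0, 1.0, 1.0)):
    sa, sm = surface(auto_mask), surface(manual_mask)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_manual = distance_transform_edt(np.logical_not(sm), sampling=spacing)
    d_to_auto = distance_transform_edt(np.logical_not(sa), sampling=spacing)
    return 0.5 * (d_to_manual[sa].mean() + d_to_auto[sm].mean())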
Figure 8.
Screenshot of the Docker output report. Users are able to review the segmentation quality immediately after a scan is processed.
Figure 9.
Quantitative results for all regions of interest (ROIs), comparing SLANT-27 with representative baseline methods.
