IEEE Trans Med Imaging. 2017 Nov;36(11):2319-2330. doi: 10.1109/TMI.2017.2721362. Epub 2017 Jun 28.

Auto-Context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging


Seyed Sadegh Mohseni Salehi et al. IEEE Trans Med Imaging. 2017 Nov.

Abstract

Brain extraction, or whole-brain segmentation, is an important first step in many neuroimage analysis pipelines. The accuracy and robustness of brain extraction are therefore crucial for the accuracy of the entire brain analysis process. State-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and the query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions, and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information, along with the original image patches, to learn the local shape and connectedness of the brain and extract it from non-brain tissue. The brain extraction results obtained from our CNNs are superior to recently reported results in the literature on two publicly available benchmark data sets, LPBA40 and OASIS, on which we obtained Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm on the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), which performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.
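As a rough illustration of the auto-context idea described above, the sketch below chains a few small 2D networks, each of which sees the image slice concatenated with the posterior probability map produced by the previous step. This is a minimal sketch, not the authors' code: the network `TinyStepNet`, the uniform 0.5 prior at the first step, and the function names are illustrative assumptions.

```python
# Minimal sketch of the auto-context iteration described in the abstract.
# Assumptions (not from the paper's code): a tiny stand-in 2D network per step,
# a uniform 0.5 prior at the first step, and simple channel concatenation.
import torch
import torch.nn as nn

class TinyStepNet(nn.Module):
    """Hypothetical stand-in for one auto-context step (the paper uses a
    voxelwise multi-pathway CNN or a U-net here)."""
    def __init__(self, in_channels=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, x):
        # Posterior probability of "brain" at every pixel of the slice.
        return torch.sigmoid(self.body(x))

def auto_context_predict(image, steps):
    """Refine the brain posterior iteratively: each step sees the original
    image plus the previous step's posterior map as context."""
    posterior = torch.full_like(image, 0.5)        # uninformative prior
    for net in steps:
        x = torch.cat([image, posterior], dim=1)   # (N, 2, H, W)
        posterior = net(x)
    return posterior

# Usage with random data; in the paper each step is trained in sequence,
# using the posteriors produced by the preceding step on the training images.
image = torch.rand(1, 1, 256, 256)                 # one 256 x 256 slice
steps = [TinyStepNet() for _ in range(4)]          # four auto-context steps
brain_mask = auto_context_predict(image, steps) > 0.5
```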


Figures

Fig. 1
Schematic diagram of the proposed networks: a) the proposed voxelwise architecture for 2D image inputs; b) the network architecture that combines the information of the 2D pathways for 3D segmentation; c) the U-net style architecture; and d) the auto-context formation of the network, using network (a) as an example, to reach the final results. The 2D input size was 256 × 256 for the LPBA40 and fetal MRI datasets and 176 × 176 for the OASIS dataset. The context information, along with multiple local patches, is used to learn local shape information from the training data and predict labels for the test data.
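To make the 2.5D idea in panels (a) and (b) concrete, the following is a hedged sketch of fusing the axial, coronal, and sagittal posteriors into a single 3D prediction. The paper combines the pathways with a learned layer (Fig. 1b); a plain voxelwise average is used below only as a placeholder, and all names are illustrative.

```python
# Sketch of fusing three directional 2D posteriors into one 3D prediction.
# The paper learns this fusion (Fig. 1b); averaging is a stand-in.
import numpy as np

def fuse_pathway_posteriors(p_axial, p_coronal, p_sagittal):
    """Each input is a (D, H, W) volume of brain probabilities in [0, 1],
    assembled by running a 2D network over slices along one direction."""
    return (p_axial + p_coronal + p_sagittal) / 3.0

# Usage with random stand-in volumes.
shape = (64, 256, 256)
p_ax, p_co, p_sa = (np.random.rand(*shape) for _ in range(3))
brain_mask = fuse_pathway_posteriors(p_ax, p_co, p_sa) > 0.5
```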
Fig. 2
The Dice coefficient during training at the four steps of the auto-context algorithm on all datasets, for the U-net (top) and the voxelwise 2.5D CNN approach (bottom). These plots show that the networks learned the context information over the iterations and converged.
Fig. 3
Predicted masks overlaid on the data for fetal brain MRI. The top images show the improvement of the predicted brain mask across the steps of the Auto-Net using the 2.5D CNN; the middle images show the corresponding improvement using the U-net. The bottom left and right images show the predicted brain masks using BET and 3dSkullStrip, respectively. The right image shows the ground truth manual segmentation. Despite the challenges of this application, our method (Auto-Net) performed very well and much better than the other methods. The Dice coefficient, sensitivity, and specificity, calculated against the ground truth for this case, are shown underneath each image.
Fig. 4
Predicted masks overlaid on the reconstructed fetal brain MRI for a challenging case with decent image reconstruction quality and intensity non-uniformity due to B1 field inhomogeneity; the top images show the predicted brain masks by Auto-Net using 2.5D-CNN (left) and U-net (right). The bottom left and right images show the predicted brain masks using BET and 3dSkullStrip, respectively. The right image shows the ground truth manual segmentation. As can be seen, fetal brains can be in non-standard arbitrary orientations. Moreover, the fetal head may be surrounded by different tissue or organs. Despite all these challenges, the Auto-2.5D CNN performed well and much better than the other methods in this case. The Dice coefficient, sensitivity, and specificity, calculated based on the ground truth, are shown underneath each image in this figure.
Fig. 5
Evaluation scores (Dice, sensitivity, and specificity) for the three data sets (LPBA40, OASIS, and fetal MRI). The median is displayed in the boxplots; blue crosses represent outliers falling more than 1.5 times the interquartile range above the upper quartile or below the lower quartile. For the fetal dataset the registration-based algorithms were removed because of their poor performance; those algorithms were not meant to work on images of this kind with non-standard geometry. Overall, these results show that our methods (Auto-Nets: Auto 2.5D and Auto U-net) achieved a very good trade-off between sensitivity and specificity and generated the highest Dice coefficients among all methods, including the PCNN [12]. The performance of the Auto-Nets was consistently superior in the fetal MRI application, where the other methods performed poorly due to the non-standard image geometry and features. Using the auto-context algorithm produced a significant increase in Dice coefficients for both the voxelwise and the FCN-style networks.
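For reference, the three scores reported in this figure can be computed from a predicted binary mask and the manual ground truth as below. This uses the standard definitions, not the authors' evaluation code, and the function name is illustrative.

```python
# Dice, sensitivity, and specificity from binary masks (standard definitions).
import numpy as np

def dice_sensitivity_specificity(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)    # brain voxels found
    tn = np.count_nonzero(~pred & ~truth)  # non-brain voxels rejected
    fp = np.count_nonzero(pred & ~truth)   # non-brain voxels labeled brain
    fn = np.count_nonzero(~pred & truth)   # brain voxels missed
    dice = 2.0 * tp / (2.0 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```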
Fig. 6
Logarithmic-scale absolute error maps of brain extraction obtained from six algorithms on the LPBA40 dataset. This analysis shows that the Auto-Nets performed much better than the other methods on this dataset.
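One plausible way to build such an error map, assuming the predicted and manual masks have been brought into a common space, is to count voxelwise disagreements across subjects and display the counts on a logarithmic scale. The snippet below is an illustrative reconstruction under that assumption, not the paper's code.

```python
# Illustrative sketch (assumption, not the paper's code): a voxelwise error map
# over co-registered subjects, shown on a logarithmic scale.
import numpy as np

def log_error_map(pred_masks, truth_masks):
    """pred_masks and truth_masks are (S, D, H, W) binary arrays over S subjects
    aligned to a common space; returns log(1 + number of misclassified subjects)
    at every voxel."""
    errors = (pred_masks.astype(bool) != truth_masks.astype(bool)).sum(axis=0)
    return np.log1p(errors)
```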

References

    1. Makropoulos A, Gousias IS, Ledig C, Aljabar P, Serag A, Hajnal JV, Edwards AD, Counsell SJ, Rueckert D. Automatic whole brain MRI segmentation of the developing neonatal brain. IEEE Transactions on Medical Imaging. 2014;33(9):1818–1831.
    2. Li G, Wang L, Shi F, Lyall AE, Lin W, Gilmore JH, Shen D. Mapping longitudinal development of local cortical gyrification in infants from birth to 2 years of age. The Journal of Neuroscience. 2014;34(12):4228–4238.
    3. MacDonald D, Kabani N, Avis D, Evans AC. Automated 3D extraction of inner and outer surfaces of cerebral cortex from MRI. NeuroImage. 2000;12(3):340–356.
    4. Clouchoux C, Kudelski D, Gholipour A, Warfield SK, Viseur S, Bouyssi-Kobar M, Mari J-L, Evans AC, Du Plessis AJ, Limperopoulos C. Quantitative in vivo MRI measurement of cortical development in the fetus. Brain Structure and Function. 2012;217(1):127–139.
    5. de Brebisson A, Montana G. Deep neural networks for anatomical brain segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops; 2015. pp. 20–28.