Neuroimage. 2020 Feb 1;206:116324.
doi: 10.1016/j.neuroimage.2019.116324. Epub 2019 Nov 6.

An automated framework for localization, segmentation and super-resolution reconstruction of fetal brain MRI

Michael Ebner et al. Neuroimage. 2020.

Abstract

High-resolution volume reconstruction from multiple motion-corrupted stacks of 2D slices plays an increasingly important role in fetal brain Magnetic Resonance Imaging (MRI) studies. Existing reconstruction methods are time-consuming and often require user interaction to localize and extract the brain from several stacks of 2D slices. We propose a fully automatic framework for fetal brain reconstruction that consists of four stages: 1) fetal brain localization based on a coarse segmentation by a Convolutional Neural Network (CNN), 2) fine segmentation by a second CNN trained with a multi-scale loss function, 3) novel, single-parameter outlier-robust super-resolution reconstruction, and 4) fast and automatic high-resolution visualization in standard anatomical space suitable for pathological brains. We validated our framework on images of fetuses with normal brains and of fetuses with variable degrees of ventriculomegaly associated with open spina bifida, a congenital malformation that also affects the brain. Experiments show that each step of the proposed pipeline outperforms state-of-the-art methods in both segmentation and reconstruction comparisons, including expert-reader quality assessments. The reconstruction results of the proposed method compare favorably with those obtained by manual, labor-intensive brain segmentation, which unlocks the potential use of automatic fetal brain reconstruction in clinical practice.

Keywords: Brain localization; Convolutional neural network; Deep learning; Fetal MRI; Segmentation; Super resolution.


Conflict of interest statement

WL was employed by King's College London during most of the preparation of this work and was employed by the company Nvidia for the final editing and proofreading of the manuscript. SO is a founder and shareholder of BrainMiner Ltd, UK.

Figures

Fig. 1
Three example stacks of MRI of fetuses with spina bifida (a)–(c), with gestational ages of 24, 24 and 29 weeks, respectively. Stack (a) has a consistent appearance with small inter-slice motion. Stack (b) shows motion between two interleaved sub-stacks. Stack (c) illustrates artifact-affected slices, with two such ‘outlier’ slices shown in (d) and (e).
Fig. 2
Comparison of a normal fetus and a fetus with open spina bifida showing a Chiari II malformation with ventriculomegaly. Image courtesy of UZ Leuven.
Fig. 3
The proposed fully automatic framework for fetal brain MRI reconstruction to obtain high-resolution (HR) visualizations in standard anatomical planes from multiple low-resolution (LR) input stacks. The automatic localization, segmentation and reconstruction parts are detailed in Figs. 4, 5 and 6, respectively.
Fig. 4
The proposed fetal brain localization method using a CNN (Loc-Net) to obtain a coarse segmentation followed by 3D bounding box fitting.
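The box-fitting step after the coarse CNN segmentation can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name, the margin value, and the axis-aligned-box assumption are mine.

```python
import numpy as np

def bounding_box_3d(mask, margin=5):
    """Fit an axis-aligned 3D bounding box around a binary foreground mask.

    Returns (start, stop) voxel-index arrays, expanded by `margin` voxels
    and clipped to the volume extent. `margin` is an assumed safety border.
    """
    coords = np.argwhere(mask > 0)
    if coords.size == 0:
        raise ValueError("empty mask: no foreground voxels")
    start = np.maximum(coords.min(axis=0) - margin, 0)
    stop = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    return start, stop

# Toy volume with a small foreground block.
vol = np.zeros((32, 32, 32), dtype=np.uint8)
vol[10:20, 12:18, 8:16] = 1
start, stop = bounding_box_3d(vol, margin=2)
print(start, stop)  # [ 8 10  6] [22 20 18]
```

The cropped region `vol[start[0]:stop[0], start[1]:stop[1], start[2]:stop[2]]` would then be passed to the fine segmentation stage.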
Fig. 5
The proposed fetal brain segmentation method using a CNN (Seg-Net) that works on the localization result. We propose to use a multi-scale loss function to train Seg-Net.
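A multi-scale segmentation loss of the kind mentioned above can be sketched as below. This is a minimal 2D NumPy illustration of the general idea (averaging a soft-Dice loss over progressively downsampled predictions and targets), not the authors' exact loss; the pooling scheme, scale count, and soft-Dice form are assumptions.

```python
import numpy as np

def avg_pool2d(x, k):
    """Average-pool a 2D array by factor k (shape assumed divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def soft_dice(pred, target, eps=1e-6):
    """Soft Dice score between a probability map and a binary mask."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def multi_scale_dice_loss(pred, target, num_scales=3):
    """Average (1 - Dice) over num_scales downsampled resolutions.

    Coarse scales emphasise overall localization; fine scales the boundary.
    """
    losses = []
    for s in range(num_scales):
        k = 2 ** s
        losses.append(1.0 - soft_dice(avg_pool2d(pred, k), avg_pool2d(target, k)))
    return float(np.mean(losses))

# Toy example: a perfect prediction yields (near-)zero loss at every scale.
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1.0
print(multi_scale_dice_loss(mask, mask))
```

In a training setting the same structure would be expressed with differentiable tensor operations rather than NumPy.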
Fig. 6
The proposed outlier-robust high-resolution volume reconstruction method for fetal brain MRI. As part of a two-step motion-correction/volumetric reconstruction cycle, we propose an effective robust SRR method for complete outlier rejection that relies on a single hyperparameter only and retains a linear least-squares formulation. A fast template-space alignment, which is robust also for pathological brains, is achieved by using a principal brain axes (PBA)-initialized rigid volume-to-template registration based on symmetric block-matching.
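The two-step cycle described above, alternating linear least-squares reconstruction with complete NCC-based slice rejection governed by a single threshold, can be sketched in miniature as follows. This is a hedged toy reconstruction of the idea, not the authors' algorithm: the slice operators, the fixed number of cycles, and the 1D signal stand in for the real 3D forward model, and `robust_srr` and its arguments are hypothetical names.

```python
import numpy as np

def robust_srr(slices, operators, beta):
    """Sketch of single-hyperparameter outlier-robust SRR.

    slices    : observed slice vectors y_k
    operators : matrices A_k mapping the HR volume x to slice k
    beta      : NCC threshold; slices with Sim(y_k, A_k x) < beta are rejected

    Alternates (1) linear least-squares reconstruction over retained slices
    and (2) complete rejection of outlier slices.
    """
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    keep = list(range(len(slices)))
    x = None
    for _ in range(3):  # fixed number of reconstruction/rejection cycles
        A = np.vstack([operators[k] for k in keep])
        y = np.concatenate([slices[k] for k in keep])
        x, *_ = np.linalg.lstsq(A, y, rcond=None)  # linear least squares
        keep = [k for k in keep if ncc(slices[k], operators[k] @ x) >= beta]
    return x, keep

# Toy example: an 8-sample HR signal, two clean "slices" and one
# signal-dropout outlier acquired through the pair-averaging operator.
x_true = np.arange(8.0)
A1 = np.zeros((4, 8))
A2 = np.zeros((4, 8))
for i in range(4):
    A1[i, 2 * i] = A1[i, 2 * i + 1] = 0.5  # pair-averaging slice model
    A2[i, 2 * i] = 1.0                     # even-index sampling
x, keep = robust_srr([A1 @ x_true, A2 @ x_true, np.zeros(4)],
                     [A1, A2, A1], beta=0.9)
print(keep)  # [0, 1] -- the dropout slice is rejected
```

Because rejection is complete (a slice is either kept or discarded), the data term stays a plain linear least-squares problem, which is the property highlighted in the caption.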
Fig. 7
Visual comparison of different methods for fetal brain localization. The three rows show examples from Group A (controls), B1 (pre-surgical spina bifida), and B2 (post-surgical spina bifida), respectively. Columns 1–6: in-plane. Columns 7–12: through-plane. Yellow: ground truth. Green: detection results.
Fig. 8
Distribution of gestational age in the experimental fetal image set.
Fig. 9
Visual comparison of different methods for fetal brain segmentation. The three rows show examples from Group A (controls), B1 (pre-surgical spina bifida), and B2 (post-surgical spina bifida), respectively. Columns 1–5: in-plane. Columns 6–10: through-plane. Yellow: ground truth. Green: segmentation results.
Fig. 10
Quantitative evaluation of different methods for fetal brain localization.
Fig. 11
Quantitative evaluation of different methods for fetal brain segmentation.
Fig. 12
Fetal brain segmentation performance obtained by our multi-scale loss function using different numbers of scales S. The results are based on validation images from Groups A and B1.
Fig. 13
Comparison of SRR (S) with overlaid SRR (L)/(M)/(S) high-resolution masks, obtained using either the manual masks (SRR (M); blue), the automatic segmentations by Seg-Net (SRR (S); differences to SRR (M) in green) or the localization results by Loc-Net (SRR (L); differences to SRR (M)/(S) in red). The respective visualizations of SRR (S) were obtained by reconstructing the entire template-space field of view using the brain-motion-corrected slice transformations transformed into the template space. The last row shows the only B1 case that failed in the template-space alignment step for SRR (S), see Table 2; the final alignment was obtained after manual re-initialization of the volume-to-template registration.
Fig. 14
Qualitative comparison of reconstruction methods in the subject space. Visual comparisons of different reconstruction methods for a B1 (left) and an A (right) case where challenging target stacks were (automatically) selected. For the group A case (b), additional visualizations are provided to assess the outlier-rejection performance (Fig. 16) and for template-space comparisons (Inline Supplementary Fig. S6). Dilated SRR (M) masks were used for visual cropping. SRR (M) without outlier rejection (OR) presents various artifacts. Similarly, the localization masks used for SRR (L) lead to poor reconstruction outcomes despite the use of outlier rejection. The outlier-robust results SRR (M) and the proposed SRR (S), based on manual and automated brain masks, respectively, provide successful reconstructions and are, visually, almost indistinguishable. Green arrows indicate artifacts in SRR (M) without OR that are eliminated using our proposed OR method. Red arrows show differences between our proposed method and Kainz et al. (M).
Fig. 15
Histogram relating the number of slice rejections to the average slice motion per stack. Shown are the mean values of the 2-norm of the translation (t_x, t_y, t_z) (mm) and rotation (r_x, r_y, r_z) (degrees) parameters of the non-rejected slices for each individual stack after the final motion-correction iteration i = 3 for SRR (S), across all 39 cases. The stack associated with the sample in the upper-left corner is shown in Fig. 16.
Fig. 16
Stack associated with the upper-left corner in Fig. 15, showing substantial in-plane artifacts with relatively moderate slice motion for the non-rejected slices. Red crosses mark the slices that were automatically rejected by the proposed outlier-robust SRR (S) algorithm (only the slices covering the brain are shown; six additional, automatically segmented slices outside the brain were successfully rejected too). The NCC slice similarities Sim(y_k^i, A_k^i x^{i-1}) < β^i at the time of rejection at iteration i ∈ {1, 2, 3}, with (β^1, β^2, β^3) = (0.5, 0.65, 0.8), are shown in addition. Thus, the outlier-rejection method successfully detects and rejects artifact-corrupted slices while keeping slices with good in-plane quality for the final volumetric reconstruction step. It is worth noting that this stack served as the target stack for the SRR algorithm. Successful reconstructions in subject and template spaces for this case are shown in Fig. 14b and Inline Supplementary Fig. S6, respectively.
Fig. 17
Quantitative comparison of different reconstruction methods based on Sim(y_k^i, A_k^i x^i) after the final SVR-SRR iteration (i = 3) in terms of SSIM and PSNR. A * denotes a significant difference compared to SRR (M) within each group based on Kruskal-Wallis with post-hoc Dunn tests (p < 0.05). Thus, SRR (S) and SRR (M) appear to have similar volumetric self-consistency, as quantified by the similarities between the motion-corrected slices and the respectively projected high-resolution volume slices.
Fig. 18
Summary of the clinical evaluation. Two radiologists performed a qualitative assessment of the obtained high-resolution reconstructions regarding anatomical clarity, SRR quality and subjective preference for 39 cases. A higher score indicates a better outcome. For anatomical clarity, scores indicate how well CS, CAIF and LCF are visualized in each image, with ratings 0 (structure not seen), 1 (poor depiction), 2 (suboptimal visualization; image not adequate for diagnostic purposes), 3 (clear visualization of structure but reduced tissue contrast; image-based diagnosis feasible), and 4 (excellent depiction; optimal for diagnostic purposes). SRR quality is a combined average of individual visible-artifact and blur scores, with ratings from 0 (many artifacts/much blur) to 2 (no artifacts/blur). Radiologists' preference ranks reconstructions subjectively from the least (0) to the most preferred (2). A * denotes a significant difference compared to SRR (M) based on a Wilcoxon signed-rank test (p < 0.05). The results underline that SRR (M)/(S) represent high-quality reconstructions with high anatomical clarity that are visually indistinguishable and were subjectively preferred over Kainz et al. (M) by the two radiologists.
Fig. 19
Qualitative comparison of reconstruction methods in the template space. The comparison shows the template-space reconstructions of a group B2 subject (post-surgical SB, GA = 27 weeks) based on 7 low-resolution input stacks. An original stack (linearly resampled) with a resolution of 0.47² × 3 mm³ is provided for reference. Red arrows show anatomical differences between SRR (S) and Kainz et al. (M).
Fig. 20
Qualitative comparison of reconstruction methods in the template space. The comparison shows the template-space reconstructions of a group B2 subject (post-surgical SB, GA = 26 weeks) based on 4 low-resolution input stacks. An original stack (linearly resampled) with a resolution of 0.74² × 3 mm³ is provided for reference. Green arrows indicate the rejection of the final, intensity-corrupted slice of the original stack using the outlier threshold β = 0.85. Red arrows show anatomical differences between SRR (S) and Kainz et al. (M) in direct comparison with the original stack.
Fig. 21
Comparison of the obtained reconstructions in the template space for six different input-data configurations, using the case with the highest number of available input stacks (nine; B1 subject, pre-surgical SB, GA = 25 weeks). The horizontal axis for the quantitative comparisons is sorted in ascending order based on the NCC outcome, whereby “1a+3c+5s” constrained by its mask was used as reference. Using at least three stacks in three different orientations leads to high anatomical detail in all three anatomical planes. Increasing the number of stacks per orientation can further increase the reconstruction quality. Additional comparisons for other cases are shown in Inline Supplementary Figs. S11 and S12.

References

    1. Aertsen M., Verduyckt J., De Keyzer F., Vercauteren T., Van Calenbergh F., De Catte L., Dymarkowski S., Demaerel P., Deprest J. Reliability of MR imaging-based posterior fossa and brain stem measurements in open spinal dysraphism in the era of fetal surgery. Am. J. Neuroradiol. 2019;40:191–198. http://www.ajnr.org/lookup/doi/10.3174/ajnr.A5930
    2. Alansary A., Rajchl M., McDonagh S.G., Murgasova M., Damodaram M., Lloyd D.F.A., Davidson A., Rutherford M., Hajnal J.V., Rueckert D., Kainz B. PVR: patch-to-volume reconstruction for large area motion correction of fetal MRI. IEEE Trans. Med. Imaging. 2017;36:2031–2044. arXiv:1611.07289. http://ieeexplore.ieee.org/document/8024032/
    3. Anquez J., Angelini E.D., Bloch I. Automatic segmentation of head structures on fetal MRI. In: 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. IEEE; 2009. pp. 109–112. http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5192995
    4. Candès E.J., Li X., Ma Y., Wright J. Robust principal component analysis? J. Assoc. Comput. Mach. 2011;58:1–37. http://portal.acm.org/citation.cfm?doid=1970392.1970395
    5. Çiçek Ö., Abdulkadir A., Lienkamp S.S., Brox T., Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin S., Joskowicz L., Sabuncu M.R., Unal G., Wells W., editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016. Springer International Publishing; Cham: 2016. pp. 424–432. http://link.springer.com/10.1007/978-3-319-46723-8_49
