Neuroimage. 2016 Dec;143:235-249.
doi: 10.1016/j.neuroimage.2016.09.011. Epub 2016 Sep 7.

Fast and sequence-adaptive whole-brain segmentation using parametric Bayesian modeling


Oula Puonti et al. Neuroimage. 2016 Dec.

Abstract

Quantitative analysis of magnetic resonance imaging (MRI) scans of the brain requires accurate automated segmentation of anatomical structures. A desirable feature for such segmentation methods is to be robust against changes in acquisition platform and imaging protocol. In this paper we validate the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable accuracy to state-of-the-art methods on T1-weighted scans while being one to two orders of magnitude faster. The proposed algorithm is also shown to be robust against small training datasets, and readily handles images with different MRI contrast as well as multi-contrast data.

Keywords: Atlases; Bayesian modeling; MRI; Parametric models; Segmentation.
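The generative parametric modeling the abstract refers to rests on a simple Bayesian idea: each voxel label is inferred by combining an atlas-derived prior with a Gaussian intensity likelihood whose parameters are estimated from the scan itself, which is what makes the method sequence-adaptive. The sketch below is a hypothetical, stripped-down illustration of that per-voxel maximum a posteriori rule only; the actual algorithm additionally deforms a mesh-based atlas, models bias fields, and uses Gaussian mixtures per structure.

```python
import numpy as np

def classify_voxels(intensities, prior, means, variances):
    """Illustrative Bayesian voxel classification:
    posterior ∝ Gaussian likelihood × atlas prior.

    intensities: (N,) voxel intensities
    prior:       (N, K) atlas prior probability of each of K classes per voxel
    means, variances: (K,) Gaussian intensity parameters per class
    """
    # Gaussian likelihood of each voxel's intensity under each class
    diff = intensities[:, None] - means[None, :]
    lik = np.exp(-0.5 * diff ** 2 / variances[None, :])
    lik /= np.sqrt(2.0 * np.pi * variances[None, :])
    post = lik * prior                        # Bayes' rule (unnormalized)
    post /= post.sum(axis=1, keepdims=True)   # normalize over classes
    return post.argmax(axis=1)                # maximum a posteriori label
```

Because only the class means and variances depend on the pulse sequence, re-estimating them for each new scan (in the paper, via an EM-style optimization) is what allows the same atlas prior to be reused across scanners and contrasts.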


Figures

Figure 1:
Left: T1-weighted scan from the training data. Center: corresponding manual segmentation. Right: atlas mesh built from 20 randomly selected subjects from the training data.
Figure 2:
Left: an example slice from the intra-scanner dataset. Right: the corresponding manual segmentation.
Figure 3:
Left: an example slice from the cross-scanner dataset. Right: the corresponding manual segmentation.
Figure 4:
An example of the T1- (flip angle = 30°) and PD-weighted (flip angle = 3°) scans of the same subject from the multi-echo dataset.
Figure 5:
An example of the T1- and T2-weighted scans of the same subject from the test-retest dataset.
Figure 6:
The Dice scores of the different methods for the intra-scanner (top) and cross-scanner (bottom) data. The proposed method = green, BrainFuse = blue, PICSL MALF = magenta, FreeSurfer = red and Majority Voting = black. Additional results, obtained by preprocessing the input data using the FreeSurfer pipeline, are also shown (filled boxes with broken lines). On each box, the central horizontal line is the median, the circle is the mean, and the edges of the box are the 25th and 75th percentiles. Data points falling outside the range covered by scaling the box to four times its height are considered outliers and are plotted individually. The whiskers extend to the most extreme data points that are not considered outliers. See Section 3.4 for the acronyms.
Figure 7:
Mean Dice scores over the ROIs for the intra-scanner (left) and the cross-scanner (right) data when the different methods are trained using randomly picked subsets of only 5, 10 and 15 training subjects. The error bars correspond to the lowest and highest obtained mean Dice score across the random subsets. The score obtained when all subjects in the training pool are used is also shown for reference (fourth bar of each method). The proposed method (P) is shown in green, BrainFuse (BF) in blue, PICSL MALF (PM) in magenta and majority voting (MV) in black. Additional results, obtained by preprocessing the input data using the FreeSurfer pipeline, are also shown (filled bars with broken lines).
Figure 8:
Dice scores for the multi-echo dataset. Performance on T1-weighted data is shown in dark green, on PD-weighted data in orange, and on multi-contrast input data in light green. The box plots are drawn in the same way as explained in Figure 6.
Figure 9:
Top row: target scans, T1-weighted on the left and PD-weighted on the right. Bottom row: automatic segmentation using only the T1-weighted scan on the left, automatic segmentation using both scans on the right.
Figure 10:
The ASPC scores for the test-retest dataset. Volume differences between the time points are shown in light green for multi-contrast input data and in dark green for T1-weighted data only. The box plots are drawn in the same way as explained in Figure 6. The outlier marked by an arrow is the one shown in Figure 11.
Figure 11:
An example of an outlier subject marked by the arrow in Figure 10. Top row from left to right: a T1-weighted scan with no visible artifacts, a T2-weighted scan with a line-like artifact in the pallidum and thalamus area marked by red arrows, and an automated segmentation of pallidum and thalamus showing the segmentation error caused by the artifact. The bottom row shows zoomed figures of the affected area, highlighting vertical lines in the T2-scan that cause jagged borders in the automatic segmentation, resulting in a poor ASPC score for this subject.
Figure 12:
The ASPC scores of the different methods for 10 randomly chosen subjects from the test-retest dataset. The performance of the proposed method is shown in dark green when using only T1-weighted data and in light green when using both T1- and T2-weighted scans; BrainFuse is shown in blue, PICSL MALF in magenta, FreeSurfer in red and Majority Voting in black. The box plots are drawn in the same way as explained in Figure 6.
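The ASPC metric used in the test-retest comparisons is conventionally the absolute symmetrized percent change between the structure volumes measured at the two time points; assuming that standard definition (100·|V₁−V₂| / mean(V₁, V₂)), a smaller score means better reproducibility. A minimal sketch:

```python
def aspc(v1, v2):
    """Absolute symmetrized percent change between two volume measurements.

    Assumed definition: 100 * |v1 - v2| / mean(v1, v2). Symmetrizing by the
    mean makes the score independent of which scan is treated as baseline.
    """
    return 100.0 * abs(v1 - v2) / ((v1 + v2) / 2.0)

# Hypothetical pallidum volumes (mm^3) at the two test-retest time points
print(aspc(1500.0, 1530.0))  # ≈ 1.98
```

For a truly stable structure in back-to-back scans the true volume change is zero, so any nonzero ASPC reflects measurement noise in the segmentation pipeline, which is why artifacts like the one in Figure 11 show up as outliers.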
