Neuroimage. 2016 Jan 15;125:479-497. doi: 10.1016/j.neuroimage.2015.10.013. Epub 2015 Oct 19.

Automatic segmentation of the striatum and globus pallidus using MIST: Multimodal Image Segmentation Tool

Eelke Visser et al. Neuroimage. 2016.

Abstract

Accurate segmentation of the subcortical structures is frequently required in neuroimaging studies. Most existing methods use only a T1-weighted MRI volume to segment all supported structures and usually rely on a database of training data. We propose a new method that can use multiple image modalities simultaneously and a single reference segmentation for initialisation, without the need for a manually labelled training set. The method models intensity profiles in multiple images around the boundaries of the structure after nonlinear registration. It is trained using a set of unlabelled training data, which may be the same images that are to be segmented, and it can automatically infer the location of the physical boundary using user-specified priors. We show that the method produces high-quality segmentations of the striatum, which is clearly visible on T1-weighted scans, and the globus pallidus, which has poor contrast on such scans. The method compares favourably to existing methods, showing greater overlap with manual segmentations and better consistency.
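The abstract describes sampling intensity profiles in several co-registered modalities around the structure boundary after nonlinear registration. The sketch below illustrates only that sampling step, under assumptions not stated in the paper (Python with nibabel/SciPy, hypothetical file names, an arbitrary 0.5 mm step and 11-sample profile length); it is not the MIST implementation.

```python
# Illustrative sketch (not the MIST implementation): sample an intensity
# profile from a co-registered volume along the outward normal at each
# vertex of a boundary mesh. File names, step size and profile length
# are hypothetical.
import numpy as np
import nibabel as nib
from scipy.ndimage import map_coordinates

def sample_profiles(image_path, vertices_mm, normals, n_steps=11, step_mm=0.5):
    """Sample a 1D intensity profile along each vertex normal (trilinear)."""
    img = nib.load(image_path)
    data = img.get_fdata()
    inv_affine = np.linalg.inv(img.affine)
    # Offsets centred on the boundary, in mm along the unit normal.
    offsets = (np.arange(n_steps) - n_steps // 2) * step_mm
    profiles = np.empty((len(vertices_mm), n_steps))
    for i, (v, n) in enumerate(zip(vertices_mm, normals)):
        pts_mm = v[None, :] + offsets[:, None] * n[None, :]       # (n_steps, 3)
        pts_vox = nib.affines.apply_affine(inv_affine, pts_mm).T  # (3, n_steps)
        profiles[i] = map_coordinates(data, pts_vox, order=1)
    return profiles

# One profile matrix per modality; the model then combines them per vertex.
modalities = ["t1w.nii.gz", "t2w.nii.gz", "fa.nii.gz"]  # hypothetical paths
# profiles = {m: sample_profiles(m, vertices_mm, normals) for m in modalities}
```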

Keywords: Brain; Globus pallidus; Huntington; Multimodal; Segmentation; Striatum.


Figures

Fig. 1
Different intensities (white and dark grey) around the structure to be segmented (light grey).
Fig. 2
Profile model for a single vertex on the inferior boundary of the putamen. First column: specified priors (green: first component, blue: second component, red: third component). Second column: profiles as sampled in the 57 training subjects before edge-based alignment, but after initial registration. Third column: MAP estimate of component means and aligned profiles. The effect of alignment is especially clear when comparing panels 2 and 3 on the FA row; note the overall shift of about 1 mm. Columns 4–6: MAP estimates of mean and standard deviation for all components. Rows correspond to the modalities that were used in the model. Row 1: T2-weighted, row 2: T1-weighted, row 3: FA. Note that the observed profiles are shorter than the mean profiles; this corresponds to the lengths k′ and k in the main text.
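The caption refers to observed profiles (length k′) that are aligned against longer mean profiles (length k) before the component means are estimated. As a rough illustration of what such alignment involves, the toy sketch below slides each shorter profile along a running mean and keeps the least-squares offset, then re-estimates the mean; the actual method instead fits a probabilistic model with MAP estimation under user-specified priors, which this code does not reproduce.

```python
# Illustrative alignment sketch (not the paper's MAP model): each observed
# profile of length k' is slid along a longer running mean profile of
# length k, and the offset with the smallest squared error is kept.
# Iterating alignment and mean re-estimation crudely mimics the idea.
import numpy as np

def align_profiles(observed, k, n_iter=5):
    """observed: (n_subjects, k_prime) array with k_prime < k."""
    n_sub, k_prime = observed.shape
    mean = np.zeros(k)
    mean[:k_prime] = observed.mean(axis=0)            # initial guess
    shifts = np.zeros(n_sub, dtype=int)
    for _ in range(n_iter):
        for s in range(n_sub):
            errs = [np.sum((observed[s] - mean[o:o + k_prime]) ** 2)
                    for o in range(k - k_prime + 1)]
            shifts[s] = int(np.argmin(errs))
        # Re-estimate the mean profile from the aligned observations.
        acc = np.zeros(k)
        cnt = np.zeros(k)
        for s in range(n_sub):
            acc[shifts[s]:shifts[s] + k_prime] += observed[s]
            cnt[shifts[s]:shifts[s] + k_prime] += 1
        mean = np.where(cnt > 0, acc / np.maximum(cnt, 1), mean)
    return mean, shifts
```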
Fig. 3
Example subject (499566) in the HCP80 dataset showing segmentations of putamen (red), globus pallidus (green) and caudate + nucleus accumbens (blue) on axial (top two rows) and coronal (bottom two rows) slices. The FA volume was not used for segmenting the caudate nucleus and nucleus accumbens.
Fig. 4
Segmentation results for the globus pallidus in the 7 T dataset using all three modalities (red) and T1-weighted only (green). Top three rows: axial slices, bottom three rows: coronal slices. The QSM volume was not used for segmenting the caudate nucleus and nucleus accumbens.
Fig. 5
Dice overlap (first column) with manual segmentations and mean mesh distance in mm (second column) of segmentations produced by different methods in the HCP80 dataset (10 subjects). Correlation coefficients between manual and automatic mask volumes are shown in the third column. MIST: multimodal segmentation, T1: MIST with T1-weighted images only, Ft −/+: FIRST without and with boundary correction, FS: FreeSurfer. Data points that are outside the box by more than 1.5 times the interquartile range are treated as outliers. A significant difference in performance between a method and MIST is denoted by an asterisk (p ≤ 0.05; Wilcoxon signed rank test for the boxplots and Williams's test for the correlation coefficients, computed using R, http://www.r-project.org/).
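The first column of this and the following boxplot figures reports Dice overlap with manual segmentations, and significance is assessed with a paired Wilcoxon signed-rank test across subjects. A small illustration of both, assuming binary masks stored as NumPy arrays and hypothetical per-subject score arrays:

```python
# Sketch of the evaluation metrics in the boxplots: Dice overlap between a
# manual and an automatic binary mask, and a paired Wilcoxon signed-rank
# test comparing two methods across subjects. Array names are placeholders.
import numpy as np
from scipy.stats import wilcoxon

def dice(a, b):
    """Dice overlap of two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# dice_mist and dice_other would hold per-subject Dice scores for two methods.
# stat, p = wilcoxon(dice_mist, dice_other)   # paired test, as in the figure
```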
Fig. 6
Example masks for the striatum using different methods in an example subject (499566) from the HCP80 dataset. Green: manual labelling, red: automatic segmentation, yellow: overlap. BC denotes boundary correction.
Fig. 7
Dice overlap (first column) with manual segmentations and mean mesh distance in mm (second column) of segmentations produced by different methods in the 7 T dataset (29 subjects). Correlation coefficients between manual and automatic mask volumes are shown in the third column. MIST: multimodal segmentation, T1: MIST with T1-weighted images only, FtS −/+: FIRST without and with boundary correction on limited FOV MP2RAGE data, FtW −/+: FIRST without and with boundary correction on whole brain MP2RAGE data. Data points that are outside the box by more than 1.5 times the interquartile range are treated as outliers. A significant difference in performance between a method and MIST is denoted by an asterisk (p ≤ 0.05).
Fig. 8
Example patient in the HD dataset showing segmentations of putamen (red), globus pallidus (green) and caudate + nucleus accumbens (blue) on axial (top row) and coronal (bottom row) slices.
Fig. 9
Dice overlap (first column) with manual segmentations and mean mesh distance in mm (second column) of segmentations produced by different methods in the HD dataset (16 subjects). Correlation coefficients between manual and automatic mask volumes are shown in the third column. Data points that are outside the box by more than 1.5 times the interquartile range are treated as outliers. A significant difference in performance between a method and MIST is denoted by an asterisk (p ≤ 0.05).
Fig. 10
Dice overlap (first column) with manual segmentations and mean mesh distance in mm (second column) of segmentations produced after training on subsets of different sizes in the HCP80 dataset. Correlation coefficients between manual and automatic mask volumes are shown in the third column. Subsets of the full training set of 57 subjects were used to investigate how segmentation performance changes with smaller numbers of training subjects. A significant difference in performance between a subset and the full training set is denoted by an asterisk (p ≤ 0.05).
Fig. 11
Segmentation of the putamen in an example subject in the HCP80 dataset for different numbers of training subjects. Red outline: automatic segmentation, blue overlay: manual labelling.
Fig. 12
Dice overlap (first column) with manual segmentations and mean mesh distance in mm (second column) of segmentations produced in the HCP80 dataset using MIST and by using non-linear registration of the reference shape only (NLR). Correlation coefficients between manual and automatic mask volumes are shown in the third column. A significant difference in performance between NLR and MIST is denoted by an asterisk (p ≤ 0.05).

