Med Phys. 2014 Jul;41(7):072303. doi: 10.1118/1.4884224.

Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

Yanrong Guo et al. Med Phys. 2014 Jul.

Abstract

Purpose: Automatic prostate segmentation from MR images is an important task in clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Models (ASM/AAM) have limited accuracy on this problem because their basic assumption, that both the shape and the appearance of the targeted organ follow Gaussian distributions, does not hold in prostate MR images. To address this, the authors propose a sparse dictionary learning method that models image appearance nonparametrically and integrate this appearance model into a deformable segmentation framework for prostate MR segmentation.

Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, distributed discriminative dictionary (DDD) learning, which captures fine distinctions in image appearance. To increase the discriminative power of traditional dictionary-based classification, the DDD learning approach adopts three strategies. First, separate dictionaries are built for prostate and nonprostate tissues, using the discriminative features obtained from minimum redundancy maximum relevance (mRMR) feature selection. Second, linear discriminant analysis (LDA) is employed as a linear classifier, operating on the representation residuals from sparse representation, to maximize the separation between prostate and nonprostate tissues. Third, to enhance robustness, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of one global classifier for the entire prostate. These discriminative dictionaries are attached to different patches of the prostate surface and trained to adaptively capture the appearance of different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained on randomly selected samples and then assembled by a fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model non-Gaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace.
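A minimal sketch of what one such local discriminative dictionary could look like, using off-the-shelf scikit-learn components as stand-ins (this is not the authors' implementation): a mutual-information ranking approximates mRMR feature selection, MiniBatchDictionaryLearning fits one dictionary per tissue class, and LDA is trained on the pair of sparse-representation residuals. Function names, dictionary sizes, and sparsity settings are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif  # stand-in for mRMR
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def residuals(dl, X):
    """Per-sample reconstruction residual of X under one learned dictionary."""
    codes = dl.transform(X)            # sparse codes, shape (n_samples, n_atoms)
    recon = codes @ dl.components_     # reconstructions, shape (n_samples, n_features)
    return np.linalg.norm(X - recon, axis=1)


def train_local_ddd(X, y, n_atoms=64, k_features=50, sparsity=5):
    """X: (n_samples, n_features) patch features from one subsurface region.
    y: integer label array, 1 = prostate, 0 = nonprostate."""
    selector = SelectKBest(mutual_info_classif, k=k_features).fit(X, y)
    Xs = selector.transform(X)

    # One dictionary per tissue class, each trained only on its own class's samples.
    def fit_dict(Xc):
        return MiniBatchDictionaryLearning(
            n_components=n_atoms, transform_algorithm="omp",
            transform_n_nonzero_coefs=sparsity).fit(Xc)

    dl_pos, dl_neg = fit_dict(Xs[y == 1]), fit_dict(Xs[y == 0])

    # LDA separates the classes in the 2D space of (prostate, nonprostate) residuals.
    feats = np.column_stack([residuals(dl_pos, Xs), residuals(dl_neg, Xs)])
    lda = LinearDiscriminantAnalysis().fit(feats, y)
    return selector, dl_pos, dl_neg, lda


def prostate_likelihood(model, X_new):
    """Soft prostate-tissue score for new samples from the same local region."""
    selector, dl_pos, dl_neg, lda = model
    Xs = selector.transform(X_new)
    feats = np.column_stack([residuals(dl_pos, Xs), residuals(dl_neg, Xs)])
    return lda.predict_proba(feats)[:, 1]
```

In the full method, one such classifier would be trained per subsurface patch (Fig. 4), and several instances trained on randomly drawn sample subsets would be fused per region (Fig. 5).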

Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. On the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning was validated against three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% relative to manual segmentation, more accurate than the three state-of-the-art MR prostate segmentation methods used for comparison. On the second dataset, the MICCAI 2012 challenge dataset, the proposed method yields a Dice Ratio of 87.4%, again achieving better segmentation accuracy than the methods under comparison.
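As a reference for the overlap figures quoted above, the Dice Ratio (Dice similarity coefficient) between a binary automatic segmentation and the manual ground truth can be computed as in this short sketch; the array names are illustrative.

```python
import numpy as np


def dice_ratio(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """DSC = 2 * |A intersect M| / (|A| + |M|); 1.0 means perfect overlap."""
    a = auto_mask.astype(bool)
    m = manual_mask.astype(bool)
    denom = a.sum() + m.sum()
    return 2.0 * np.logical_and(a, m).sum() / denom if denom else 1.0
```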

Conclusions: A new prostate segmentation method for magnetic resonance images is proposed, combining a deformable model with dictionary learning, and it achieves more accurate segmentation on T2-weighted prostate MR images.


Figures

FIG. 1.
Complicated non-Gaussian distribution of appearance features in MR prostate images. (a) A typical slice of a T2-weighted MR prostate image. (b) Joint distribution of intensity and gradient of voxels within prostate regions across ten subjects. (c) The histogram of gradients within prostate regions across ten subjects. (d) Prostate shape distribution along the two major shape variation modes, corresponding to the two eigenvectors with the largest eigenvalues by PCA. (e) Shape models obtained for different patients, demonstrating the large interpatient shape variations.
FIG. 2.
The schematic description of our deformable segmentation framework.
FIG. 3.
Diagram of the discriminative dictionary learning framework. Each discriminative dictionary is responsible for tissue differentiation in one subsurface of the prostate. Training a discriminative dictionary consists of dictionary learning with mRMR feature selection, followed by LDA learning.
FIG. 4.
Illustration of distributed dictionary learning. (a) Diagram of distributed dictionaries: A schematic explanation of distributed discriminative dictionaries, with each taking charge of tissue differentiation in a local region. (b) Surface parcellation: The partition of our deformable model, where different subsurfaces are indicated by different colors.
FIG. 5.
Ensemble-classifier scheme of dictionary learning. In the training stage, by further dividing the entire training sample dataset into training and validation sets, we can train Y classifiers by performing DDD learning and further test the performance of each.
FIG. 6.
Five typical examples of T2-weighted MR prostate images. Due to partial volume effects and interpatient differences, there are large variations in both prostate appearance and shape in the dataset.
FIG. 7.
A typical slice of a T2-weighted MR image with manual segmentation (a) and its classification results by four dictionary-learning methods, GSD (b), GDD (c), DSD (d), and DDD (e), respectively.
FIG. 8.
ROC curves of tissue classification using four different dictionary-learning methods. (Right) The complete ROC curves; (left) a zoomed-in view of the top part of the ROC, indicated by the small rectangle.
FIG. 9.
Classification results produced by four dictionary-learning methods with ensemble learning. (a) GSD. (b) GDD. (c) DSD. (d) DDD.
FIG. 10.
ROC curves of tissue classification obtained by integrating the four dictionary-learning methods with ensemble learning. (Right) The complete ROC curves; (left) a zoomed-in view of the top part of the ROC, indicated by the small rectangle.
FIG. 11.
Diagrams of DSC (a), sensitivity (b), PPV (c), and ASD (d) measures of our proposed deformable model on all 75 T2-weighted MR images.
FIG. 12.
The box and whisker diagram of (a) DSC, sensitivity, PPV, and (b) ASD measures of our proposed deformable model for all 75 images.
FIG. 13.
Typical segmentation results for the apex (left), central (middle), and base (right) regions of two patients, produced by (a) ASM (first row) and (b) our proposed deformable model (second row). Light grey contours indicate the manual segmentations; dark grey contours indicate the automatic segmentations.
FIG. 14.
The box and whisker diagrams of DSC, measured at the apex, central, and base regions of the prostate, by (a) ASM and (b) our proposed deformable model for all 75 images.
FIG. 15.
Typical segmentation results by our proposed deformable model. Each row shows the prostate of one subject automatically segmented by our method (white) and manually delineated by an expert (grey). Different columns indicate different transversal slices from the apex (left) to the base (right) of the prostate.
