IEEE Trans Med Imaging. 2016 Apr;35(4):1077-89.
doi: 10.1109/TMI.2015.2508280. Epub 2015 Dec 11.

Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching

Yanrong Guo et al. IEEE Trans Med Imaging. 2016 Apr.

Abstract

Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications, such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method that unifies deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn a latent feature representation from prostate MR images with a stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation integrates a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset of 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation, and that our method outperforms other state-of-the-art segmentation methods.
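As a rough illustration of the sparse patch matching step described above, the sketch below reconstructs one target voxel's feature vector from a dictionary of atlas patch features and transfers the atlas labels with the resulting sparse weights. The arrays `atlas_feats`, `atlas_labels`, and `target_feat` are hypothetical stand-ins for the SSAE features and labels, and scikit-learn's Lasso is used in place of whatever sparse solver the paper employs.

```python
# Toy sparse patch matching: code the target feature over atlas features
# with an L1 penalty, then fuse atlas labels with the sparse weights.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical data: K candidate atlas patches, each with a D-dim feature
# (e.g., an SSAE code) and a binary prostate label at the patch center.
K, D = 200, 64
atlas_feats = rng.standard_normal((K, D))   # dictionary (one row per patch)
atlas_labels = rng.integers(0, 2, size=K)   # 1 = prostate, 0 = background
target_feat = rng.standard_normal(D)        # feature of the target voxel

# Sparse coding: target_feat ~ atlas_feats.T @ w, with w >= 0 and sparse.
lasso = Lasso(alpha=0.05, positive=True, max_iter=10000)
lasso.fit(atlas_feats.T, target_feat)
w = lasso.coef_

# Label transfer: likelihood = weighted fraction of prostate labels.
likelihood = w @ atlas_labels / (w.sum() + 1e-8)
print(f"prostate likelihood at this voxel: {likelihood:.3f}")
```

Repeating this per voxel yields the likelihood map that the deformable shape model is then fit against.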


Figures

Fig. 1
(a) Typical T2-weighted prostate MR images. Red contours indicate the prostate glands delineated manually by an expert. (b) Intensity distributions of prostate and background voxels around the prostate boundary of (a). (c) The 3D illustrations of prostate surfaces corresponding to each image in (a).
Fig. 2
The prostate shape distribution obtained from principal component analysis (PCA).
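A shape distribution like this can be obtained with ordinary PCA on aligned shape vectors. The sketch below uses random stand-in landmark data and keeps the modes explaining 95% of the variance; the shape count matches the 66-image dataset, but the landmark count and variance threshold are illustrative assumptions.

```python
# Minimal PCA shape model on stacked (x, y, z) landmark coordinates.
import numpy as np

rng = np.random.default_rng(1)
n_shapes, n_landmarks = 66, 1000                  # 1000 is a made-up count
shapes = rng.standard_normal((n_shapes, 3 * n_landmarks))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# SVD gives the principal shape modes without forming the huge covariance.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var = S**2 / (n_shapes - 1)
explained = np.cumsum(var) / var.sum()
n_modes = int(np.searchsorted(explained, 0.95)) + 1

# A shape is approximated as the mean plus a few mode coefficients.
coeffs = centered @ Vt[:n_modes].T
recon = mean_shape + coeffs @ Vt[:n_modes]
print(n_modes, float(np.abs(recon - shapes).max()))
```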
Fig. 3
The schematic description of our proposed segmentation framework.
Fig. 4
The similarity maps computed between a reference voxel (red cross) in the target image (a) and all voxels in the atlas image (b) by the four handcrafted feature representations, i.e., intensity (c), Haar (d), HOG (e), and LBP (f), as well as the two deep learning feature representations, namely the unsupervised SSAE (g) and the supervised SSAE (h). White contours indicate the prostate boundaries, and the black dashed cross indicates the ground-truth point in (b) corresponding to the red cross in (a).
Fig. 5
Construction of the basic AE.
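A minimal sketch of such a basic auto-encoder, assuming flattened image patches as input; the KL sparsity penalty below is the textbook sparse-AE term, which may differ from the paper's exact loss.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """One-layer auto-encoder: encode to a hidden code, decode back."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)          # latent code
        return self.decoder(h), h    # reconstruction and code

def sparse_ae_loss(x, x_hat, h, rho=0.05, beta=0.1):
    """Reconstruction error plus KL penalty pushing mean activation to rho."""
    recon = nn.functional.mse_loss(x_hat, x)
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    return recon + beta * kl

x = torch.rand(32, 15 * 15)          # hypothetical batch of 15x15 patches
ae = SparseAE(15 * 15, 64)
x_hat, h = ae(x)
sparse_ae_loss(x, x_hat, h).backward()
```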
Fig. 6
The low-level feature representation learned by the SAE. Here, we reshape each row of W into the size of an image patch and visualize only its first slice as an image filter.
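That visualization amounts to reshaping each weight row back into a patch; a short sketch with hypothetical patch and layer sizes:

```python
import numpy as np
import matplotlib.pyplot as plt

W = np.random.randn(64, 15 * 15 * 9)          # 64 hidden units, 15x15x9 patches
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, row in zip(axes.ravel(), W):
    ax.imshow(row.reshape(15, 15, 9)[:, :, 0], cmap="gray")  # first slice only
    ax.axis("off")
plt.show()
```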
Fig. 7
Construction of the unsupervised SSAE with R stacked SAEs.
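The greedy layer-wise construction can be sketched as follows: each SAE is trained to reconstruct the codes of the layer beneath it, and the trained encoders are then stacked. Layer sizes, epoch count, and learning rate are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

dims = [225, 128, 64, 32]            # input dim + R = 3 hidden layers
x = torch.rand(256, dims[0])         # hypothetical batch of flattened patches
encoders, inp = [], x

for d_in, d_out in zip(dims[:-1], dims[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Sequential(nn.Linear(d_out, d_in), nn.Sigmoid())
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(100):             # pre-train this SAE on the layer below
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(inp)), inp)
        loss.backward()
        opt.step()
    encoders.append(enc)
    inp = enc(inp).detach()          # its codes become the next SAE's input

ssae = nn.Sequential(*encoders)      # the stacked encoder is the feature map
features = ssae(x)
```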
Fig. 8
Typical prostate image patches (a) and their reconstructions (b) by using the unsupervised SSAE with four stacked SAEs.
Fig. 9
Construction of the supervised SSAE with a classification layer, which fine-tunes the SSAE with respect to the task of voxel-wise classification between prostate (label = 1) and background (label = 0).
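A sketch of that refinement: a logistic classification layer is appended to the encoder stack (here untrained, standing in for the pre-trained SSAE of the previous sketch) and the whole network is fine-tuned with voxel-wise labels; dimensions and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

ssae = nn.Sequential(                           # stand-in pre-trained encoder
    nn.Linear(225, 128), nn.Sigmoid(),
    nn.Linear(128, 64), nn.Sigmoid(),
    nn.Linear(64, 32), nn.Sigmoid(),
)
model = nn.Sequential(ssae, nn.Linear(32, 1))   # + classification layer

x = torch.rand(256, 225)                        # hypothetical patch batch
y = torch.randint(0, 2, (256, 1)).float()       # 1 = prostate, 0 = background

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(100):                            # fine-tune the whole stack
    opt.zero_grad()
    nn.functional.binary_cross_entropy_with_logits(model(x), y).backward()
    opt.step()

supervised_features = ssae(x)                   # refined deep features
```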
Fig. 10
Visualization of typical feature representations of the first hidden layer (first row) and second hidden layer (second row) for the unsupervised pre-training (a) and supervised fine-tuning (b), respectively.
Fig. 11
The schematic description of sparse patch matching.
Fig. 12
Five typical T2-weighted MR prostate images acquired from different scanners, showing large variations in both prostate appearance and shape, especially between cases acquired with and without endorectal coils.
Fig. 13
Distributions of voxel samples by using four types of features: (a) intensity, (b) handcrafted, (c) unsupervised SSAE, and (d) supervised SSAE. Red crosses and green circles denote prostate and non-prostate voxel samples, respectively.
Fig. 14
(a) Typical slices of T2 MR images with manual segmentations. The likelihood maps produced by sparse patch matching with four feature representations: (b) intensity patch, (c) handcrafted, (d) unsupervised SSAE, and (e) supervised SSAE. Red contours indicate the manual ground-truth segmentations.
Fig. 15
Typical prostate segmentation results of the same patients produced by four different feature representations: (a) intensity, (b) handcrafted, (c) unsupervised SSAE, and (d) supervised SSAE. Three rows show the results for three different slices of the same patient, respectively. Red contours indicate the manual ground-truth segmentations, and yellow contours indicate the automatic segmentations.
Fig. 16
Typical prostate segmentation results of three different patients produced by four different feature representations: (a) intensity, (b) handcrafted, (c) unsupervised SSAE, and (d) supervised SSAE. The odd rows show the results for the three patients, respectively; red contours indicate the manual ground-truth segmentations, and yellow contours indicate the automatic segmentations. The even rows show the 3D visualization of the segmentation results corresponding to the images above: the red surfaces indicate the automatic segmentation results for each feature type, and the transparent grey surfaces indicate the ground-truth segmentations.
