Review

Sparse Data-Driven Learning for Effective and Efficient Biomedical Image Segmentation

John A Onofrey et al. Annu Rev Biomed Eng. 2020 Jun 4;22:127–153.
doi: 10.1146/annurev-bioeng-060418-052147. Epub 2020 Mar 13.

Abstract

Sparsity is a powerful concept to exploit in high-dimensional machine learning, offering both representational and computational efficiency, and it is well suited to medical image segmentation. We present a selection of techniques that incorporate sparsity, including strategies based on dictionary learning and deep learning, aimed at medical image segmentation and related quantification.

Keywords: dictionary learning; image representation; image segmentation; machine learning; medical image analysis; sparsity.
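
As a concrete illustration of sparse coding with a learned dictionary, the minimal sketch below learns an overcomplete patch dictionary and computes sparse codes; the library (scikit-learn), patch size, and hyperparameters are illustrative assumptions rather than choices taken from the review.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    # Stand-in for a 2D image slice; in practice this would be real medical image data.
    image = np.random.rand(128, 128)

    # Extract small patches and flatten them into feature vectors.
    patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)  # remove each patch's mean intensity

    # Learn an overcomplete dictionary; the transform yields sparse codes (few nonzeros per patch).
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
    codes = dico.fit_transform(X)
    print("average nonzeros per patch:", np.count_nonzero(codes, axis=1).mean())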

Figures

Figure 1
Dynamical dictionary updating interlaced with sequential segmentation. I_i is the image of frame i, s_i is the segmentation of frame i, and D_{i,j} represents the multiscale appearance dictionaries for class j in frame i. Figure adapted from Reference with permission from Elsevier.
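
A rough sketch of this interlacing, in which residual-based patch labeling alternates with a dictionary refit on each newly segmented frame; the data layout, coding algorithm, and update rule are assumptions for illustration and not the authors' implementation.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

    def label_patches(patches, dictionaries):
        # Assign each patch to the class whose dictionary gives the smallest sparse-coding residual.
        residuals = []
        for D in dictionaries:  # one appearance dictionary per class j
            codes = sparse_encode(patches, D, algorithm='omp', n_nonzero_coefs=5)
            residuals.append(np.linalg.norm(patches - codes @ D, axis=1))
        return np.argmin(np.stack(residuals), axis=0)

    def refit_dictionary(D, patches):
        # Refresh a class dictionary with patches from the newly segmented frame.
        if len(patches) < D.shape[0]:
            return D  # too few samples in this frame; keep the previous dictionary
        model = MiniBatchDictionaryLearning(n_components=D.shape[0], dict_init=D, random_state=0)
        return model.fit(patches).components_

    def segment_sequence(frame_patches, dictionaries):
        # frame_patches: list of (n_patches, patch_dim) arrays, one entry per frame I_i.
        segmentations = []
        for patches in frame_patches:
            labels = label_patches(patches, dictionaries)  # s_i
            segmentations.append(labels)
            dictionaries = [refit_dictionary(D, patches[labels == j])  # interlaced D_{i,j} update
                            for j, D in enumerate(dictionaries)]
        return segmentations
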
Figure 2
Typical segmentations of endocardium (red) and epicardium (purple) from four-dimensional echocardiography using the dynamical appearance model approach based on sparse dictionary learning. Two-dimensional slices through three-dimensional results show a comparison between expert manual tracing (green) and the algorithm, with excellent concordance. Figure adapted from Reference with permission from Elsevier.
Figure 3
Dictionary learning is used to learn a sparse appearance model in order to segment the cortical brain surface in postsurgical computed tomography (CT) images in epilepsy patients. Image patches are oriented according to the surface’s local differential geometry, and the appearance both inside and outside the brain surface is used to train two dictionaries of appearance from a set of training data. The dictionary models are then used to drive the segmentation process of the cortical surface in test postop CT images.
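
A minimal sketch of the two-dictionary decision described above, assigning an oriented patch to the inside or outside class by comparing sparse reconstruction residuals; the coding algorithm and sparsity level are assumptions.

    import numpy as np
    from sklearn.decomposition import sparse_encode

    def inside_or_outside(patch, D_inside, D_outside, k=5):
        # Sparse-code the oriented patch against each appearance dictionary and compare residuals.
        x = patch.reshape(1, -1)
        residual = {}
        for name, D in (("inside", D_inside), ("outside", D_outside)):
            code = sparse_encode(x, D, algorithm='omp', n_nonzero_coefs=k)
            residual[name] = np.linalg.norm(x - code @ D)
        return min(residual, key=residual.get)  # class with the smaller reconstruction error
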
Figure 4
An example of postsurgical computed tomography cortical surface segmentation results for a single subject using sparse dictionary learning of locally oriented image appearance (blue contour) compared with ground truth segmentation (yellow contour). Arrows indicate accurate cortical surface segmentation at the areas of the implanted surface electrodes near the site of the craniotomy, which is the region of greatest clinical interest. Axial images progress from the bottom of the head (left) to the top (right).
Figure 5
Flow chart of the SRC and DDLS methods (76). One target voxel is labeled by three different methods: patch-based labeling, SRC, and DDLS. The red box in the target image represents the target patch. The blue boxes in the atlas images represent the search volume area for extracting template patches. Abbreviations: DDLS, discriminative dictionary learning for segmentation; SRC, sparse representation classification. Figure adapted from Reference with permission from Elsevier.
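
A hedged sketch of SRC-style labeling as outlined in the flow chart: the target patch is sparse-coded over template patches pooled from the atlas search volumes and assigned the label with the lowest class-wise reconstruction residual; variable names and the exact fusion rule are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import sparse_encode

    def src_label(target_patch, template_patches, template_labels, k=10):
        # template_patches: (n_templates, patch_dim) patches gathered from the atlas search volumes.
        # template_labels:  (n_templates,) anatomical label of each template patch's center voxel.
        x = target_patch.reshape(1, -1)
        codes = sparse_encode(x, template_patches, algorithm='omp', n_nonzero_coefs=k).ravel()
        labels = np.unique(template_labels)
        residuals = []
        for lbl in labels:
            # Keep only the coefficients belonging to this class and measure the residual.
            class_codes = np.where(template_labels == lbl, codes, 0.0)
            residuals.append(np.linalg.norm(x - class_codes[None, :] @ template_patches))
        return labels[int(np.argmin(residuals))]
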
Figure 6
Method comparison. Segmentation results were obtained by DDLS, SRC, and the patch-based method for the subjects from the ADNI data set, with the best-case, median, and worst-case Dice coefficient results depicted. Abbreviations: ADNI, Alzheimer’s Disease Neuroimaging Initiative; DDLS, discriminative dictionary learning for segmentation; SRC, sparse representation classification. Figure adapted from Reference with permission from Elsevier.
Figure 7
Comparison between SSC liver segmentation from low-dose CT and segmentation approaches based on other shape models (all using the same training data). (First row) Procrustes analysis, rigid + scaling. (Second row) Thin-plate spline model using nonrigid deformation. (Third row) SSC and proposed algorithm. (Fourth row) Manual segmentation (ground truth). Note that the SSC results are closer to the ground truth. Areas marked by circles indicate differences where the other techniques failed, likely due to breathing artifacts. Each result was subsequently further deformed and refined. Abbreviations: CT, computed tomography; SSC, sparse shape composition. Figure adapted from Reference with permission from Elsevier.
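
For context, sparse shape composition represents an input shape as a sparse linear combination of aligned training shapes; the sketch below uses an L1-penalized least-squares fit over stacked landmark coordinates (the use of scikit-learn's Lasso and the penalty weight are assumptions, and the gross-error term of the full SSC model is omitted).

    import numpy as np
    from sklearn.linear_model import Lasso

    def ssc_project(target_shape, training_shapes, alpha=0.01):
        # training_shapes: (n_shapes, n_points * dim) pre-aligned landmark vectors.
        # target_shape:    (n_points * dim,) landmark vector of the shape to be regularized.
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(training_shapes.T, target_shape)  # solve min ||y - D w||^2 + alpha * ||w||_1
        weights = model.coef_                       # sparse weights over the training shapes
        return training_shapes.T @ weights          # shape-prior-constrained reconstruction
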
Figure 8
Results with varying levels of dropout. (a) Three dynamic contrast–enhanced magnetic resonance images at one slice level. (b) Manual ground truth tissue class segmentation (dark purple, background; blue, liver parenchyma; green, tumor; yellow, necrosis) and (c) deep neural network segmentation results for different levels of dropout (0.0, 0.1, 0.3), with the best correspondence to ground truth when dropout equals 0.1.
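
To illustrate the dropout comparison above, the toy fully convolutional block below exposes a configurable dropout rate that could be swept over 0.0, 0.1, and 0.3; this PyTorch stand-in is not the network used in the figure.

    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        # Toy fully convolutional segmenter; dropout_rate would be swept over {0.0, 0.1, 0.3}.
        def __init__(self, n_classes=4, dropout_rate=0.1):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Dropout2d(p=dropout_rate),  # spatial dropout for regularization
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Dropout2d(p=dropout_rate),
            )
            self.classifier = nn.Conv2d(16, n_classes, kernel_size=1)  # per-pixel class scores

        def forward(self, x):
            return self.classifier(self.features(x))

    # Example: per-pixel logits for a batch of single-channel slices.
    logits = TinySegNet(dropout_rate=0.1)(torch.randn(2, 1, 64, 64))  # -> (2, 4, 64, 64)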

References

    1. Zhang Z, Xu Y, Yang J, Li X, Zhang D. 2015. A survey of sparse representation: algorithms and applications. IEEE Access 3:490–530
    2. Li S, Yin H, Fang L. 2012. Group-sparse representation with dictionary learning for medical image denoising and fusion. IEEE Trans. Biomed. Eng. 59:3450–59
    3. Ma L, Moisan L, Yu J, Zeng T. 2013. A dictionary learning approach for Poisson image deblurring. IEEE Trans. Med. Imaging 32:1277–89
    4. Nayak N, Chang H, Borowsky A, Spellman P, Parvin B. 2013. Classification of tumor histopathology via sparse feature learning. In Proceedings of the 10th IEEE International Symposium on Biomedical Imaging, pp. 410–13. Piscataway, NJ: IEEE
    5. Onofrey JA, Oksuz I, Sarkar S, Venkataraman R, Staib LH, Papademetris X. 2016. MRI-TRUS image synthesis with application to image-guided prostate intervention. In Proceedings of the International Workshop on Simulation and Synthesis in Medical Imaging, pp. 157–66. Berlin: Springer
