IEEE Trans Comput Imaging. 2017 Dec;3(4):694-709. doi: 10.1109/TCI.2017.2697206. Epub 2017 Apr 21.

Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems

Saiprasad Ravishankar et al. IEEE Trans Comput Imaging. 2017 Dec.

Abstract

The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.

Keywords: Compressed sensing; Convergence analysis; Dictionary learning; Fast algorithms; Inverse problems; Sparsity.
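The model behind the abstract is easy to state concretely: the training data Y (patches as columns) is approximated by DC^T, equivalently a sum of sparse rank-one outer products d_j c_j^T. The following NumPy sketch illustrates only this identity; the sizes and the toy one-atom-per-patch sparsity pattern are illustrative, not the paper's experimental setup.

```python
import numpy as np

# Sum-of-outer-products (SOUP) model: Y ~ D @ C.T = sum_j outer(d_j, c_j),
# with unit-norm atoms d_j and sparse coefficient vectors c_j.
n, N, J = 36, 1000, 72                 # patch length, #patches, #atoms (toy sizes)
rng = np.random.default_rng(0)

D = rng.standard_normal((n, J))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary columns
C = np.zeros((N, J))
idx = rng.integers(0, J, size=N)       # toy sparsity: one atom per patch
C[np.arange(N), idx] = rng.standard_normal(N)

Y_hat = sum(np.outer(D[:, j], C[:, j]) for j in range(J))  # sum of rank-one terms
assert np.allclose(Y_hat, D @ C.T)     # identical to the matrix factorization form
```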


Figures

Fig. 1
The SOUP-DILLO Algorithm (due to the ℓ0 “norm”) for Problem (P1). Superscript t denotes the iterates in the algorithm. The vectors b^t and h^t above are computed efficiently via sparse operations.
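A hedged Python sketch of one such block coordinate descent pass: for each atom, the ℓ0-penalized sparse code has a closed-form hard-thresholding solution computed from b^t = E_j^T d_j, and the atom update normalizes h^t = E_j c_j. The incremental residual bookkeeping, the e_1 fallback for an unused atom, and the omission of the ℓ∞ bound L (inactive when L is large) are this sketch's simplifications, not verbatim from the paper.

```python
import numpy as np

def soup_dillo_pass(Y, D, C, lam):
    """One block coordinate descent pass over all atoms (illustrative sketch).

    Y: (n, N) patch matrix; D: (n, J) unit-norm atoms; C: (N, J) sparse codes;
    lam: hard-thresholding level from the penalty lam^2 * ||C||_0.
    """
    n, J = D.shape
    R = Y - D @ C.T                              # residual, updated incrementally
    for j in range(J):
        Ej = R + np.outer(D[:, j], C[:, j])      # E_j = Y - sum_{k != j} d_k c_k^T
        b = Ej.T @ D[:, j]                       # b^t = E_j^T d_j
        c = np.where(np.abs(b) >= lam, b, 0.0)   # closed-form l0 sparse coding
        h = Ej @ c                               # h^t = E_j c_j
        hnorm = np.linalg.norm(h)
        d = h / hnorm if hnorm > 0 else np.eye(n)[:, 0]  # unit-norm atom update
        D[:, j], C[:, j] = d, c
        R = Ej - np.outer(d, c)                  # residual with the updated pair
    return D, C
```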
Fig. 2
The SOUP-DILLO and SOUP-DILLI image reconstruction algorithms for Problems (P3) and (P4), respectively. Superscript t denotes the iterates. Parameter L can be set very large in practice (e.g., L ∝ ‖A^H z‖₂).
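For intuition about the image update step inside these reconstruction algorithms, here is a hedged sketch for the special case of single-coil Cartesian MRI, where A applies a unitary 2-D FFT followed by a binary sampling mask. Assuming the overlapping patches are positioned periodically so that Σ_j P_j^T P_j = β·I, the least-squares image update decouples pointwise in k-space. The function name, the β assumption, and the weight ν are illustrative, not the paper's exact formulation.

```python
import numpy as np

def mri_image_update(patch_recon, mask, z, nu, beta):
    """Solve (beta*I + nu*A^H A) y = patch_recon + nu*A^H z pointwise in k-space.

    patch_recon : sum_j P_j^T D c_j, the aggregated patch approximations
    mask        : boolean k-space sampling mask (same shape as the image)
    z           : measured k-space data, zero-filled at unsampled locations
    """
    numerator = np.fft.fft2(patch_recon, norm="ortho") + nu * (mask * z)
    denominator = beta + nu * mask.astype(float)
    return np.fft.ifft2(numerator / denominator, norm="ortho")
```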
Fig. 3
Test data (magnitudes of the complex-valued MR data are displayed here). Image (a) is available at http://web.stanford.edu/class/ee369c/data/brain.mat. The images (b)-(e) are publicly available: (b) T2 weighted brain image [69], (c) water phantom [70], (d) cardiac image [71], and (e) T2 weighted brain image [72]. Image (f) is a reference sagittal brain slice provided by Prof. Michael Lustig, UC Berkeley. Image (g) is a complex-valued reference SENSE reconstruction of 32 channel fully-sampled Cartesian axial data from a standard spin-echo sequence. Images (a) and (f) are 512 × 512, while the rest are 256 × 256. The images (b) and (g) have been rotated clockwise by 90° here for display purposes. In the experiments, we use the actual orientations.
Fig. 4
Comparison of dictionary learning approaches for adaptive sparse representation (NSRE and sparsity factors are expressed as percentages): (a) NSRE values for SOUP-DILLO at various λ along with those obtained by performing ℓ0 block coordinate descent sparse coding (as in (P1)) using learned PADL [41], [55] dictionaries (denoted ‘Post-L0’ in the plot legend); (b) (net) sparsity factors for SOUP-DILLO at various λ along with those obtained by performing ℓ0 block coordinate descent sparse coding using learned PADL [41], [55] dictionaries; (c) learning times for SOUP-DILLO and PADL; (d) learning times for SOUP-DILLO and OS-DL for various achieved (net) sparsity factors (in learning); (e) NSRE vs. (net) sparsity factors achieved within SOUP-DILLO and OS-DL; and (f) NSRE vs. (net) sparsity factors achieved within SOUP-DILLO along with those obtained by performing ℓ0 (block coordinate descent) sparse coding using learned OS-DL dictionaries.
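For concreteness, hedged one-liners for the two metrics on these axes, reading NSRE as the normalized sparse representation error ‖Y − DC^T‖_F / ‖Y‖_F and the (net) sparsity factor as the fraction of nonzeros in C, both reported as percentages; the exact definitions are paraphrased from the figure, not quoted from the paper.

```python
import numpy as np

def nsre_percent(Y, D, C):
    # Normalized sparse representation error ||Y - D C^T||_F / ||Y||_F, in percent
    return 100.0 * np.linalg.norm(Y - D @ C.T) / np.linalg.norm(Y)

def sparsity_percent(C):
    # (Net) sparsity factor: fraction of nonzero coefficients in C, in percent
    return 100.0 * np.count_nonzero(C) / C.size
```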
Fig. 5
Behavior of SOUP-DILLO MRI (for (P3)) and SOUP-DILLI MRI (for (P4)) for Image (c) with Cartesian sampling and 2.5× undersampling: (a) sampling mask in k-space; (b) magnitude of initial reconstruction y^0 (PSNR = 24.9 dB); (c) SOUP-DILLO MRI (final) reconstruction magnitude (PSNR = 36.8 dB); (d) objective function values for SOUP-DILLO MRI and SOUP-DILLI MRI; (e) reconstruction PSNR over iterations; (f) changes between successive image iterates (‖y^t − y^{t−1}‖₂) normalized by the norm of the reference image (‖y_ref‖₂ = 122.2); (g) normalized changes between successive coefficient iterates (‖C^t − C^{t−1}‖_F / ‖Y_ref‖_F), where Y_ref is the patch matrix for the reference image; (h) normalized changes between successive dictionary iterates (‖D^t − D^{t−1}‖_F / √J); (i) initial real-valued dictionary in the algorithms; and (j) real and (k) imaginary parts of the learnt dictionary for SOUP-DILLO MRI. The dictionary columns or atoms are shown as 6 × 6 patches.
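The convergence quantities in panels (f)-(h) translate directly into code; a minimal sketch with illustrative names:

```python
import numpy as np

def image_change(y_t, y_prev, y_ref):
    # Panel (f): ||y^t - y^{t-1}||_2 normalized by ||y_ref||_2
    return np.linalg.norm(y_t - y_prev) / np.linalg.norm(y_ref)

def coeff_change(C_t, C_prev, Y_ref):
    # Panel (g): ||C^t - C^{t-1}||_F / ||Y_ref||_F
    return np.linalg.norm(C_t - C_prev) / np.linalg.norm(Y_ref)

def dict_change(D_t, D_prev, J):
    # Panel (h): ||D^t - D^{t-1}||_F / sqrt(J), with J the number of atoms
    return np.linalg.norm(D_t - D_prev) / np.sqrt(J)
```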
Fig. 6
Results for Image (c) with Cartesian sampling and 2.5× undersampling. The sampling mask is shown in Fig. 5(a). Reconstructions (magnitudes): (a) DLMRI [13]; (b) PANO [75]; (c) ℓ0 “norm”-based FDLCP [18]; and (d) SOUP-DILLO MRI (with zero-filling initialization). (e)-(h) are the reconstruction error maps for (a)-(d), respectively.


References

    1. Elad M, Milanfar P, Rubinstein R. Analysis versus synthesis in signal priors. Inverse Problems. 2007;23(3):947–968.
    2. Candès EJ, Eldar YC, Needell D, Randall P. Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis. 2011;31(1):59–73.
    3. Pratt WK, Kane J, Andrews HC. Hadamard transform image coding. Proc IEEE. 1969;57(1):58–68.
    4. Ravishankar S, Bresler Y. Learning sparsifying transforms. IEEE Trans Signal Process. 2013;61(5):1072–1086.
    5. Liu Y, Cai J-F, Zhan Z, Guo D, Ye J, Chen Z, Qu X. Balanced sparse model for tight frames in compressed sensing magnetic resonance imaging. PLOS ONE. 2015 Apr;10(4):1–19. Available: http://dx.doi.org/10.1371%2Fjournal.pone.0119584.