Learning a variational network for reconstruction of accelerated MRI data

Kerstin Hammernik et al. Magn Reson Med. 2018 Jun;79(6):3055-3071.
doi: 10.1002/mrm.26977. Epub 2017 Nov 8.

Abstract

Purpose: To allow fast and high-quality reconstruction of clinical accelerated multi-coil MR data by learning a variational network that combines the mathematical structure of variational models with deep learning.

Theory and methods: Generalized compressed sensing reconstruction formulated as a variational model is embedded in an unrolled gradient descent scheme. All parameters of this formulation, including the prior model defined by filter kernels and activation functions as well as the data term weights, are learned during an offline training procedure. The learned model can then be applied online to previously unseen data.
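The unrolled gradient descent scheme described above can be sketched in code. The following is a minimal single-coil toy, not the paper's multi-coil implementation: the mask-based Fourier sampling operator, the single real-plane filter bank, and the function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, correlate

def vn_gradient_step(u, f, mask, filters, act_fns, lam):
    """One unrolled gradient-descent step of a variational-network-style
    reconstruction (single-coil toy sketch).

    u       : current complex image estimate
    f       : undersampled k-space data
    mask    : binary k-space sampling mask (stand-in for the full
              multi-coil forward operator with sensitivity maps)
    filters : learned convolution kernels k_i (the prior model)
    act_fns : learned pointwise activation functions phi_i'
    lam     : learned data-term weight lambda_t
    """
    # Data-term gradient: lam * A^H (A u - f), with A = mask * FFT
    ku = mask * np.fft.fft2(u)
    grad = lam * np.fft.ifft2(mask * (ku - f))
    # Prior gradient: sum_i K_i^T phi_i'(K_i u), acting on the real plane
    # only in this toy (the paper filters real and imaginary planes)
    for k, phi in zip(filters, act_fns):
        r = convolve(u.real, k)               # filter response K_i u
        grad = grad + correlate(phi(r), k)    # transposed convolution K_i^T
    return u - grad
```

Starting from the zero-filling solution `ifft2(f)` and applying this step T times with per-step learned parameters reproduces the overall structure of the network.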

Results: The variational network approach is evaluated on a clinical knee imaging protocol for different acceleration factors and sampling patterns using retrospectively and prospectively undersampled data. The variational network reconstructions outperform standard reconstruction algorithms, verified by quantitative error measures and a clinical reader study for regular sampling and acceleration factor 4.

Conclusion: Variational network reconstructions preserve the natural appearance of MR images as well as pathologies that were not included in the training data set. Owing to its high computational performance (a reconstruction time of 193 ms on a single graphics card) and the fact that no parameter tuning is required once the network is trained, this new approach to image reconstruction can be integrated easily into clinical workflow. Magn Reson Med 79:3055-3071, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

Keywords: accelerated MRI; compressed sensing; deep learning; image reconstruction; parallel imaging; variational network.

Figures

Figure 1
Structure of the variational network (VN). The VN consists of T gradient descent steps. To obtain a reconstruction, we feed the undersampled k-space data, the coil sensitivity maps, and the zero-filling solution into the VN. A sample gradient step is depicted in detail. Because we are dealing with complex-valued images, we learn separate filters kᵢᵗ for the real and imaginary planes. The non-linear activation function ϕᵢᵗ′ combines the filter responses of these two feature planes. During the training procedure, the filter kernels, activation functions, and data term weights λᵗ are learned.
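The real/imaginary filtering inside one gradient step can be sketched as follows. This is a simplified single-filter illustration of the structure in Figure 1; the function name and the use of `scipy.ndimage.convolve` are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def combined_filter_response(u, k_re, k_im, phi_prime):
    """Apply separate learned kernels to the real and imaginary planes of a
    complex image and merge both feature planes through one learned
    activation, as in the per-step structure of Figure 1 (toy sketch)."""
    # K_re * Re(u) + K_im * Im(u): combined response of the two planes
    r = convolve(u.real, k_re) + convolve(u.imag, k_im)
    # one shared non-linearity over the combined feature response
    return phi_prime(r)
```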
Figure 2
Variational network training procedure: We aim at learning a set of parameters θ of the VN during an offline training procedure. For this purpose, we compare the current reconstruction of the VN to an artifact-free reference using a similarity measure. This gives us the reconstruction error which is propagated back to the VN to compute a new set of parameters.
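The training loop of Figure 2 amounts to minimizing a similarity loss between the VN output and the reference over the parameters θ. As a heavily reduced, hedged sketch, the toy below fits only a single data-term weight by gradient descent on the MSE, using a finite-difference gradient in place of the backpropagation the paper uses; `recon_fn` stands in for the full unrolled network.

```python
import numpy as np

def train_lambda(recon_fn, u0, ref, lam=1.0, lr=0.1, steps=50, eps=1e-4):
    """Toy offline training: fit one data-term weight by minimizing the
    MSE between the reconstruction and an artifact-free reference.
    The paper learns filters, activations, and all weights jointly by
    backpropagation; the finite-difference gradient here is a stand-in."""
    for _ in range(steps):
        loss = np.mean(np.abs(recon_fn(u0, lam) - ref) ** 2)
        loss_eps = np.mean(np.abs(recon_fn(u0, lam + eps) - ref) ** 2)
        grad = (loss_eps - loss) / eps   # numerical gradient w.r.t. lambda
        lam -= lr * grad                 # gradient-descent parameter update
    return lam
```

After training, the learned parameters are fixed and applied online to unseen data, exactly as described in the abstract.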
Figure 3
Coronal PD-weighted scan with acceleration R = 4 of a 32-year-old male. The green bracket indicates osteoarthritis. The first and third rows depict reconstruction results for regular Cartesian sampling; the second and fourth rows depict the same for variable-density random sampling. Zoomed views show that the learned VN reconstruction appears slightly sharper than the PI-CS TGV and dictionary learning reconstructions. Unlike CG SENSE and PI-CS TGV, the dictionary learning and VN reconstructions suppress artifacts substantially. Results based on random sampling show reduced residual artifacts and slightly increased sharpness compared to regular sampling.
Figure 4
Difference images to the reference image for the reconstructed coronal PD-weighted scans with acceleration R = 4 presented in Figure 3. The undersampling artifacts are clearly visible in the CG SENSE and zero-filling results. While TGV retains a residual undersampling artifact for regular sampling, the dictionary learning method suppresses it; however, we observe larger errors at object boundaries in the dictionary learning results. The VN result shows the smallest error of all compared methods.
Figure 5
Coronal fat-saturated PD-weighted scan with acceleration R = 4 of a 57-year-old female. The green bracket indicates broad-based, full-thickness chondral loss and a subchondral cystic change. The green arrow depicts an extruded and torn medial meniscus. The first and second rows depict reconstruction results for regular Cartesian sampling; the third and fourth rows depict the same for variable-density random sampling. The zoomed views show that the learned VN reconstruction appears sharper than the PI-CS TGV and dictionary learning reconstructions. The VN reconstruction shows reduced artifacts compared to the other methods. Results based on random sampling show reduced residual artifacts and appear sharper than the results based on regular sampling.
Figure 6
Difference images to the reference image for the reconstructed coronal fat-saturated PD-weighted scans with acceleration R = 4 presented in Figure 5. The undersampling artifacts are clearly visible in the CG SENSE and zero-filling results. Both PI-CS TGV and dictionary learning retain residual undersampling artifacts for regular sampling. We observe larger errors at object boundaries in the dictionary learning results. The VN result shows the smallest error of all compared methods and suppresses the undersampling artifacts.
Figure 7
Reconstruction results for sagittal fat-saturated T2-weighted, sagittal PD-weighted and axial fat-saturated T2-weighted sequences of a complete knee protocol for acceleration factor R = 4 with regular undersampling. Each sequence here is illustrated with results from a different patient, identified by gender and age (e.g., M50 indicates a 50-year-old male). Pathological cases and a pediatric case are shown for both male and female patients of various ages. Green arrows and brackets indicate pathologies. Yellow arrows show residual artifacts that are visible in the different reconstructions, but not in the learned VN reconstructions.
Figure 8
Reconstruction results of prospectively undersampled data for regular sampling with R = 4. We show reconstruction results for dictionary learning, PI-CS TGV, and our VN for a whole knee protocol of a 27-year-old female volunteer. We observe behavior similar to the retrospectively undersampled data: dictionary learning and PI-CS TGV perform reasonably well for non-fat-saturated scans, while the fat-saturated scans appear artificial with PI-CS TGV and show a noise pattern with dictionary learning, most prominently in the sagittal fat-saturated T2-weighted scan. Dictionary learning also appears slightly blurrier, best seen in the axial slice. The VN reconstructions have fewer undersampling artifacts and improved SNR.
Figure 9
Examples of learned parameters of the VN. Filter kernels for the real (kRE) and imaginary (kIM) planes as well as the corresponding activation functions ϕ′ and potential functions ϕ are shown. Each potential function ϕ was obtained by integrating the activation function ϕ′, including an additional integration constant.
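Recovering a potential ϕ from a learned activation ϕ′, as done for Figure 9, is a one-dimensional numerical integration. A minimal sketch, assuming a uniform grid and the cumulative trapezoidal rule (the paper does not specify the quadrature used):

```python
import numpy as np

def potential_from_activation(phi_prime, x, c=0.0):
    """Numerically integrate a learned activation phi' over the grid x to
    recover the potential phi, up to the integration constant c (mirrors
    how the potential functions shown in Figure 9 were obtained)."""
    vals = phi_prime(x)
    dx = x[1] - x[0]   # uniform grid spacing assumed
    # cumulative trapezoidal rule: phi(x_j) = sum of trapezoid areas up to j
    phi = np.concatenate(([0.0], np.cumsum((vals[1:] + vals[:-1]) * dx / 2.0)))
    return phi + c
```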

