Deep, deep learning with BART

Moritz Blumenthal et al. Magn Reson Med. 2023 Feb;89(2):678-693.
doi: 10.1002/mrm.29485. Epub 2022 Oct 18.

Abstract

Purpose: To develop a deep-learning-based image reconstruction framework for reproducible research in MRI.

Methods: The BART toolbox offers a rich set of implementations of calibration and reconstruction algorithms for parallel imaging and compressed sensing. In this work, BART was extended by a nonlinear operator framework that provides automatic differentiation to allow computation of gradients. Existing MRI-specific operators of BART, such as the nonuniform fast Fourier transform, are directly integrated into this framework and are complemented by common building blocks used in neural networks. To evaluate the use of the framework for advanced deep-learning-based reconstruction, two state-of-the-art unrolled reconstruction networks, namely the Variational Network and MoDL, were implemented.
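As a rough illustration of the chain-rule mechanics that such an operator framework automates, the following stand-alone C sketch pairs a forward map with the action of its derivative and composes two such operators; the names (nlop, forward, derivative) are hypothetical and do not reflect BART's actual nlop API.

    #include <math.h>
    #include <stdio.h>

    /* A differentiable operator: a forward map y = F(x) and the action of its
     * derivative dy = DF|_x(dx). (Hypothetical sketch, not BART's nlop API.) */
    struct nlop {
    	double (*forward)(double x);
    	double (*derivative)(double x, double dx);
    };

    /* Apply the chain G(F(x)); the derivative of G is evaluated at F(x),
     * mirroring the automatic chaining described above. */
    static double chain_forward(const struct nlop* f, const struct nlop* g, double x)
    {
    	return g->forward(f->forward(x));
    }

    static double chain_derivative(const struct nlop* f, const struct nlop* g, double x, double dx)
    {
    	return g->derivative(f->forward(x), f->derivative(x, dx));
    }

    static double sqr_fwd(double x) { return x * x; }
    static double sqr_der(double x, double dx) { return 2. * x * dx; }
    static double sin_fwd(double x) { return sin(x); }
    static double sin_der(double x, double dx) { return cos(x) * dx; }

    int main(void)
    {
    	struct nlop sqr = { sqr_fwd, sqr_der };
    	struct nlop sine = { sin_fwd, sin_der };

    	double x = 0.7;
    	/* H(x) = sin(x^2), DH|_x(dx) = cos(x^2) * 2x * dx */
    	printf("H(x)  = %f\n", chain_forward(&sqr, &sine, x));
    	printf("DH(1) = %f\n", chain_derivative(&sqr, &sine, x, 1.0));
    	return 0;
    }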

Results: State-of-the-art deep image-reconstruction networks can be constructed and trained using BART's gradient-based optimization algorithms. The BART implementation achieves performance similar to that of the original TensorFlow-based implementations in terms of training time and reconstruction quality.

Conclusion: By integrating nonlinear operators and neural networks into BART, we provide a general framework for deep-learning-based reconstruction in MRI.

Keywords: MRI; automatic differentiation; deep learning; image reconstruction; inverse problems; parallel imaging.


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1:
Integration of deep-learning modules into BART. The numerical backend (red) is accessed by md-functions, which invoke BART's internal, generically optimized functions or external libraries that offer highly optimized code for special functions. Differentiable neural networks are implemented as non-linear operators (blue). The nn-library (green) extends the non-linear operator framework by deep-learning-specific features. The training algorithms are integrated into BART's iterative framework (violet); iter6 provides a new interface for batched gradient-based training algorithms.
Figure 2:
Basic concepts of nlops. A) An atomic nlop, shown here with two complex-valued inputs (x1, x2) and two outputs (y1 = F1(x1, x2), y2 = F2(x1, x2)), consists of the forward operator F and its derivatives DiFo, which are modeled by linops. F and the DiFo communicate via a shared data structure. B) Chaining of two nlops F and G. Since G is applied to the output F(x), its derivative DG|F(x) is automatically evaluated at F(x). C) The two nlops F and G are combined to form H, whose output 1 is linked into input 1 to form I, whose inputs 0 and 1 are duplicated to construct J(x1, x2) = F(x1, G(x1, x2)). The derivatives of the final operator are constructed automatically (not shown).
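The construction in panel C corresponds to the multivariate chain rule: duplicating x1 makes its derivative the sum of the direct path through D1F and the indirect path D2F∘D1G. A minimal C sketch of this sum over paths, using hypothetical scalar operators F and G (not BART code):

    #include <stdio.h>

    /* Hypothetical two-input operators F and G with their partial-derivative
     * actions D_i F|_(x1,x2)(dx); a conceptual sketch, not BART code. */
    static double F(double x1, double x2)              { return x1 * x2; }
    static double D1F(double x1, double x2, double dx) { return x2 * dx; }
    static double D2F(double x1, double x2, double dx) { return x1 * dx; }

    static double G(double x1, double x2)              { return x1 + 3. * x2; }
    static double D1G(double x1, double x2, double dx) { return dx; }

    /* J(x1, x2) = F(x1, G(x1, x2)): output 1 of the combined operator is
     * linked into input 1, and the two remaining x1 inputs are duplicated. */
    static double J(double x1, double x2)
    {
    	return F(x1, G(x1, x2));
    }

    /* Duplication of x1 turns its derivative into a sum over both paths:
     * D1J = D1F + D2F o D1G, both evaluated at (x1, G(x1, x2)). */
    static double D1J(double x1, double x2, double dx)
    {
    	double g = G(x1, x2);
    	return D1F(x1, g, dx) + D2F(x1, g, D1G(x1, x2, dx));
    }

    int main(void)
    {
    	double x1 = 2.0, x2 = 0.5;
    	printf("J = %f, D1J(1) = %f\n", J(x1, x2), D1J(x1, x2, 1.0));
    	return 0;
    }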
Figure 3:
Schematic description of operators, linops and nlops as data structures in BART. Solid lines mean “points to”, dotted lines “points to indirectly”, and dashed lines “calls”. Colons indicate specific realizations of a data structure, i.e. operator_chain_s is the operator_data_s structure used for chaining operators. Objects required to create the respective structures are marked in red; other structures and references are created automatically. A) An operator holds a reference to a data structure and a function which is called when the operator is applied. B) A linop holds references to multiple operators, such as the forward and adjoint operators, which share a common data structure. C) An nlop holds references to the non-linear forward operator and to linops modeling the derivatives. The forward operator and the linops have access to a shared data structure nlop_data_s. D) The data structure of a chain operator holds references to the chained operators, which are applied sequentially when the chain operator is applied.
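Read as plain C, the containment described above amounts to structs holding pointers to data and to other operators. The simplified sketch below mirrors panels A-D with hypothetical field names; it is not copied from BART's headers.

    #include <complex.h>

    struct operator_data_s;                 /* opaque, operator-specific state */

    /* A) operator: data plus the function called when the operator is applied. */
    struct operator_s {
    	struct operator_data_s* data;
    	void (*apply)(const struct operator_data_s* data,
    	              complex float* dst, const complex float* src);
    };

    /* D) data of a chain operator: the two chained operators, applied in order. */
    struct operator_chain_s {
    	const struct operator_s* first;
    	const struct operator_s* second;
    };

    static void chain_apply(const struct operator_chain_s* c,
                            complex float* dst, complex float* tmp, const complex float* src)
    {
    	c->first->apply(c->first->data, tmp, src);
    	c->second->apply(c->second->data, dst, tmp);
    }

    /* B) linop: forward and adjoint operators sharing one data structure. */
    struct linop_s {
    	const struct operator_s* forward;
    	const struct operator_s* adjoint;
    };

    /* C) nlop: non-linear forward operator plus linops modeling its derivatives,
     * all with access to a shared nlop_data_s. */
    struct nlop_data_s;
    struct nlop_s {
    	const struct operator_s* forward;
    	const struct linop_s* derivative;   /* one per input/output pair in general */
    	struct nlop_data_s* shared;
    };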
Figure 4:
Comparison of the TensorFlow and BART implementations of VarNet (A) and MoDL (B). For reference, we also show the results of the adjoint reconstruction AHy and of an 𝓁1-Wavelet-regularized SENSE reconstruction computed with the BART pics tool. Boxplots are based on the PSNR and SSIM of the respective evaluation datasets, computed with the coil sensitivities as foreground mask; this mask explains the discrepancy with the SSIM values shown on the reconstructed images.
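For orientation, a PSNR restricted to a foreground mask can be computed as 20·log10(peak/RMSE) over the masked pixels only. The C helper below is an illustrative sketch of one such convention (taking the peak as the maximum reference magnitude inside the mask); it is not the evaluation code used for the figure.

    #include <math.h>
    #include <stddef.h>

    /* PSNR between reference and reconstruction, restricted to a foreground mask
     * (e.g. derived from the coil sensitivities). Illustrative helper only. */
    static double masked_psnr(const float* ref, const float* rec,
                              const unsigned char* mask, size_t n)
    {
    	double err = 0., peak = 0.;
    	size_t count = 0;

    	for (size_t i = 0; i < n; i++) {

    		if (!mask[i])
    			continue;

    		double d = (double)ref[i] - (double)rec[i];
    		err += d * d;
    		peak = fmax(peak, fabs((double)ref[i]));
    		count++;
    	}

    	double rmse = sqrt(err / (double)count);
    	return 20. * log10(peak / rmse);
    }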
Figure 5:
Comparison of two example reconstructions with MoDL and VarNet using one set of coil sensitivity maps (usual SENSE) and two sets of coil sensitivity maps (soft-SENSE). The aliased k-space data are simulated by first zero-padding the fully sampled coil images, then sub-sampling the k-space by a factor of two, and finally applying the usual sampling pattern (every fourth line and 28 autocalibration lines). Using two sets of coil sensitivity maps reduces undersampling artifacts (cf. arrows) and improves the PSNR and SSIM for both VarNet and MoDL.
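A sampling pattern of this kind (every fourth phase-encoding line plus a fully sampled autocalibration block in the k-space center) can be written down directly; the following C sketch illustrates the described pattern and is not the simulation code used for the figure.

    #include <stdlib.h>

    /* Build a 1D Cartesian sampling mask over ny phase-encoding lines:
     * every accel-th line plus acs fully sampled autocalibration lines
     * in the k-space center. Caller frees the returned mask. */
    static unsigned char* sampling_mask(int ny, int accel, int acs)
    {
    	unsigned char* mask = calloc(ny, 1);

    	for (int y = 0; y < ny; y++)
    		if (0 == y % accel)
    			mask[y] = 1;

    	for (int y = ny / 2 - acs / 2; y < ny / 2 + acs / 2; y++)
    		mask[y] = 1;

    	return mask;
    }

    /* For the pattern described above: sampling_mask(ny, 4, 28). */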
Figure 6:
Comparison of MoDL and VarNet for non-Cartesian reconstructions using a radial trajectory with 44 spokes. The fully sampled k-space data from the reference knee image in Figure 4 were interpolated onto the trajectory to simulate non-Cartesian k-space data. For reference, we show the results of the adjoint reconstruction AHy with density compensation, of a CG-SENSE reconstruction, and of an 𝓁1-Wavelet-regularized reconstruction computed with the BART pics tool.
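A radial trajectory with uniformly distributed spokes follows kx = r·cos(φ), ky = r·sin(φ) with φ = π·s/N for spoke s. The C sketch below illustrates this; the exact angular scheme used for the figure is not specified here, and in practice BART's traj tool generates such trajectories.

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Sample positions of a radial trajectory: spoke s gets the angle
     * pi * s / nspokes, with nread samples along its diameter.
     * Illustrative sketch only. */
    static void radial_traj(float* kx, float* ky, int nspokes, int nread)
    {
    	for (int s = 0; s < nspokes; s++) {

    		double phi = M_PI * (double)s / (double)nspokes;

    		for (int r = 0; r < nread; r++) {

    			double rad = (double)r - (double)nread / 2.; /* -N/2 ... N/2-1 */
    			kx[s * nread + r] = (float)(rad * cos(phi));
    			ky[s * nread + r] = (float)(rad * sin(phi));
    		}
    	}
    }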
Figure 7:
Comparison of training (left) and inference (right) times for MoDL and VarNet on different GPUs (full names in text). We observed slow host-to-device copies on the TITAN Xp, which might affect the TensorFlow result for MoDL on this GPU. In general, the BART and TensorFlow implementations provide similar performance.
Figure 8:
Brain images reconstructed from 60 radial k-space spokes via a coil-combined inverse nuFFT, an 𝓁1-Wavelet-regularized PICS reconstruction, and a PICS reconstruction using a learned log-likelihood prior (left to right).

References

    1. Hammernik K, Klatzer T, Kobler E, et al. Learning a variational network for reconstruction of accelerated MRI data. Magn. Reson. Med. 2017;79(6):3055–3071.
    2. Aggarwal HK, Mani MP, Jacob M. MoDL: model-based deep learning architecture for inverse problems. IEEE Trans. Med. Imaging. 2019;38(2):394–405.
    3. Sodickson DK, Manning WJ. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays. Magn. Reson. Med. 1997;38(4):591–603.
    4. Griswold MA, Jakob PM, Heidemann RM, et al. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn. Reson. Med. 2002;47(6):1202–1210.
    5. Lustig M, Pauly JM. SPIRiT: iterative self-consistent parallel imaging reconstruction from arbitrary k-space. Magn. Reson. Med. 2010;64(2):457–471.
