Med Image Anal. 2021 Oct;73:102149.
doi: 10.1016/j.media.2021.102149. Epub 2021 Jun 29.

Model-based multi-parameter mapping

Yaël Balbastre et al. Med Image Anal. 2021 Oct.

Abstract

Quantitative MR imaging is increasingly favoured for its richer information content and standardised measures. However, computing quantitative parameter maps, such as those encoding longitudinal relaxation rate (R1), apparent transverse relaxation rate (R2*) or magnetisation-transfer saturation (MTsat), involves inverting a highly non-linear function. Many methods for deriving parameter maps assume perfect measurements and do not consider how noise is propagated through the estimation procedure, resulting in needlessly noisy maps. Instead, we propose a probabilistic generative (forward) model of the entire dataset, which is formulated and inverted to jointly recover (log) parameter maps with a well-defined probabilistic interpretation (e.g., maximum likelihood or maximum a posteriori). The second order optimisation we propose for model fitting achieves rapid and stable convergence thanks to a novel approximate Hessian. We demonstrate the utility of our flexible framework in the context of recovering more accurate maps from data acquired using the popular multi-parameter mapping protocol. We also show how to incorporate a joint total variation prior to further decrease the noise in the maps, noting that the probabilistic formulation allows the uncertainty on the recovered parameter maps to be estimated. Our implementation uses a PyTorch backend and benefits from GPU acceleration. It is available at https://github.com/balbasty/nitorch.
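The forward-model inversion described above can be illustrated with a minimal per-voxel sketch: fitting a mono-exponential gradient-echo decay by maximum likelihood in the log-parameter basis, so that the recovered intercept and R2* are positive by construction. This is not the authors' nitorch implementation; the echo times, noise level, log-linear initialisation and plain Gauss-Newton refinement are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the authors' nitorch code): per-voxel
# maximum-likelihood fit of a mono-exponential decay
#   s(t) = exp(a - t * exp(b)),  a = log intercept,  b = log R2*,
# so both parameters are positive after exponentiation.
rng = np.random.default_rng(0)
t = np.array([2.3, 4.6, 6.9, 9.2, 11.5, 13.8])   # echo times in ms (assumed)
a_true, b_true = np.log(100.0), np.log(0.05)     # intercept 100, R2* = 0.05 /ms
y = np.exp(a_true - t * np.exp(b_true)) + rng.normal(0.0, 0.5, t.shape)

# Log-linear initialisation: log y = a - exp(b) * t
slope, intercept = np.polyfit(t, np.log(y), 1)
a, b = intercept, np.log(-slope)

for _ in range(20):                              # Gauss-Newton refinement
    s = np.exp(a - t * np.exp(b))                # forward model
    r = y - s                                    # residuals
    J = np.stack([s, -t * np.exp(b) * s], 1)     # Jacobian w.r.t. (a, b)
    delta = np.linalg.solve(J.T @ J, J.T @ r)    # Gauss-Newton step
    a, b = a + delta[0], b + delta[1]

print(np.exp(a), np.exp(b))                      # recovered intercept and R2*
```

The log-linear step alone is the classical noise-sensitive estimator criticised in the abstract; the nonlinear refinement is what a likelihood-based fit adds on top of it.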


Conflict of interest statement

Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Graphical abstract
Fig. 1
Qualitative analysis of two toy problems related to MR relaxometry. The first toy problem (first row) has an objective function of the form (x − exp(y))^2, while the other toy problem (second row) has one of the form (x − exp(exp(y)))^2. In each case, the first column shows the objective function in black along with quadratic approximations corresponding to Gauss-Newton (red) and the proposed second order method (blue). The second column shows the magnitude of the gradient-Hessian ratio. The black line corresponds to a purely quadratic objective function, whose optimum is found in one step (|g(x)|/h(x) = |x − x*|). When the ratio is under this black line, monotonic convergence is ensured. The third column shows convergence of the Gauss-Newton (red) and proposed (blue) iterative schemes when starting from either a convex or non-convex lobe, with the optimum shown in dashed black. In each case, convergence is shown both in terms of objective value per iteration and parameter value per iteration. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
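The behaviour in this figure can be reproduced numerically for the first toy problem, f(y) = (x − exp(y))^2. The "proposed" loading used below, h = 2(exp(2y) + |r| exp(y)), keeps the magnitude of the residual's curvature term; it is an illustrative reading of a stabilised approximate Hessian, not necessarily the paper's exact formula.

```python
import math

# Toy problem f(y) = (x - exp(y))^2.  Gauss-Newton uses h_gn = 2*exp(2y);
# the stabilised variant adds |r|*exp(y), keeping the Hessian positive and
# the step bounded.  The loading is an assumption for illustration.
def newton_step(y, x, proposed):
    s = math.exp(y)
    r = x - s                                   # residual
    g = -2.0 * s * r                            # gradient of f
    h = 2.0 * (s * s + abs(r) * s) if proposed else 2.0 * s * s
    return y - g / h

x = 1.0                                         # optimum at y* = log(x) = 0
y_gn = newton_step(-5.0, x, proposed=False)     # one Gauss-Newton step
y_prop = -5.0
for _ in range(40):                             # stabilised iterations
    y_prop = newton_step(y_prop, x, proposed=True)
print(y_gn, y_prop)   # GN overshoots far past the optimum; stabilised ends near 0
```

Starting from y = −5, a single Gauss-Newton step jumps to roughly y + exp(−y) − 1, far past the optimum, whereas the stabilised step r/(exp(y) + |r|) is bounded by 1 and increases y monotonically towards 0.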
Fig. 2
Qualitative analysis of two-dimensional toy problems related to MR relaxometry. The first toy problem (top half) has an objective function of the form (x0 − exp(z − t0 y))^2 + (x1 − exp(z − t1 y))^2, while the other toy problem (bottom half) has one of the form (x0 − exp(z − t0 exp(y)))^2 + (x1 − exp(z − t1 exp(y)))^2. In each case, the first column shows the objective function along each of the two dimensions (in black) along with quadratic approximations corresponding to Gauss-Newton (red) and the proposed second order methods (blue and green). The second column shows the normalised step size for each method, color-coded such that values below one are blue and values above one are red. The optimum is marked by a red star. Convergence paths from two random initial coordinates (solid and dashed curves) are overlaid. In the second toy problem, two different loading matrices are used: Proposed uses the diagonal of the absolute Hessian, diag(|H|), whereas Proposed+ uses its sum across columns, |H|·1. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
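The Proposed+ loading admits a quick validity check: a diagonal matrix holding the row sums of |H| dominates |H| in the Loewner order, because the difference is symmetric, diagonally dominant, and hence positive semi-definite. A small numeric check with a made-up Hessian:

```python
import numpy as np

# Check that the |H|·1 loading majorises the absolute Hessian: for a
# made-up 2x2 Hessian H, D = diag(|H|·1) (row sums of |H| on the diagonal)
# satisfies D - |H| >= 0, since D - |H| is diagonally dominant.
H = np.array([[4.0, -3.0],
              [-3.0, 5.0]])            # illustrative Hessian with coupling
A = np.abs(H)                          # elementwise absolute Hessian
D = np.diag(A.sum(axis=1))             # the |H|·1 loading
eigs = np.linalg.eigvalsh(D - A)       # all eigenvalues should be >= 0
print(eigs)
```

Because the loading is diagonal, the resulting quadratic bound is separable across parameters, which is what makes the update cheap to solve per voxel.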
Fig. 3
Negative log-likelihood and step size function of the nonlinear SPGR model. The first column shows the log-likelihood of a single observation with respect to each log-parameter, while keeping the others fixed. The second column shows the step size function (red) obtained with the proposed approximate Hessian, compared with the theoretical perfect step size function f(x) = |x − x*|. As long as the actual step size is below this theoretical function, monotonic convergence is ensured. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Fig. 4
Optimisation of 1000 simulated voxels for 10,000 iterations using the proposed second order method (left), Gauss-Newton (middle) and Levenberg-Marquardt (right). The top graph shows the negative log-likelihood per iteration for 50 random samples, while the middle graph shows the normalised gain per iteration for these same 50 samples. Gain is only shown for the proposed method. Note that if an iteration was non-monotonic, its gain would be negative, which never happens in our simulations. The bottom graph shows the distribution of the final log-likelihood of all 1000 samples. The dashed line represents the expected log-likelihood of the true parameters. Log-likelihood and iterations are shown in log scale.
Fig. 5
Optimisation of 1000 simulated voxels for 10,000 iterations using the proposed second order method. The optimisation is either performed in the basis of the log-parameters (left), the rate representation (middle) or the time representation (right). The top graph shows the negative log-likelihood per iteration for 50 random samples, while the middle graph shows the normalised gain per iteration for these same 50 samples. The bottom graph shows the distribution of the final log-likelihood of all 1000 samples. The dashed line represents the expected log-likelihood of the true parameters. Log-likelihood and iterations are shown in log scale.
Fig. 6
Top: R1, R2* and MTsat maps obtained with all methods. Bottom: z-normalised mean squared error between (unseen) predicted and acquired gradient-echo images. MSE was computed across all voxels in a single volume. Lower is better.
Fig. 7
Decimation of the number of echoes. Maps were computed using the first 2, 4 or 6 echoes. Graphs on the left display the mean squared error (lower is better) relative to the reference maps (SPGR-ML with four repeats of 8 echoes) as a function of the number of echoes used. The corresponding parameter maps are displayed on the right.
Fig. 8
Uncertainty mapping and propagation. Log-parameter maps (E[log r1] = r̃1) and their uncertainty (sd[log r1]) were estimated using SPGR+JTV, which includes an edge-preserving prior, and SPGR-ML, which has no (or an infinitely uninformative) prior. The expected value and standard deviation of r1 and t1 under a Laplace posterior (E[r1]) or a peaked posterior (exp(r̃1)) are shown.
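The propagation step in this figure follows standard log-normal identities: if the Laplace approximation gives log r1 ~ N(r̃1, sigma^2), then E[r1] = exp(r̃1 + sigma^2/2) and sd[r1] = E[r1]·sqrt(exp(sigma^2) − 1), so the posterior mean always sits slightly above the peaked (plug-in) value exp(r̃1). A quick check with illustrative numbers:

```python
import math

# Standard log-normal identities used when propagating uncertainty from a
# Gaussian (Laplace) posterior on log r1 to the r1 map itself: if
# log r1 ~ N(mu, sigma^2), then E[r1] = exp(mu + sigma^2 / 2) and
# sd[r1] = E[r1] * sqrt(exp(sigma^2) - 1).  Values below are illustrative.
mu, sigma = math.log(0.8), 0.1         # e.g. r1 around 0.8 /s, 10% log-sd
mean_r1 = math.exp(mu + 0.5 * sigma ** 2)
sd_r1 = mean_r1 * math.sqrt(math.exp(sigma ** 2) - 1.0)
print(mean_r1, sd_r1)                  # mean slightly above exp(mu) = 0.8
```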
