2023 May;2023:1372.

Can we predict motion artifacts in clinical MRI before the scan completes?


Malte Hoffmann et al. Proc Int Soc Magn Reson Med Sci Meet Exhib. 2023 May.

Abstract

Subject motion can cause artifacts in clinical MRI, frequently necessitating repeat scans. We propose to alleviate this inefficiency by predicting artifact scores from partial multi-shot multi-slice acquisitions, which may guide the operator in aborting corrupted scans early.

Keywords: AI-guided radiology; brain; deep learning; image quality; neuroimaging.


Figures

Figure 1.
Training strategy. The k-space sampling model simulates motion in the first few shots of a k-space slice. Under-sampled channels and their inverse Fourier Transform (IFT) are input to the artifact-rating model, which uses convolutions (Figure 3), pooling, and dense layers to predict a ReLU-activated artifact score. We activate all other trainable layers with LeakyReLU (parameter 0.2). We minimize the mean squared error (MSE) from the ground truth s, obtained for the magnitude image reconstructed from all shots, using a prior model trained by the original authors (IQD).
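The activations and loss named in the caption above can be sketched in a few lines; this is a minimal numpy illustration of the LeakyReLU (slope 0.2) hidden activation, the ReLU output that keeps predicted artifact scores non-negative, and the MSE objective, not the authors' actual training code.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    """LeakyReLU with slope 0.2, as used on the trainable hidden layers."""
    return np.where(x > 0, x, alpha * x)

def relu(x):
    """ReLU output activation: predicted artifact scores are non-negative."""
    return np.maximum(x, 0.0)

def mse(pred, target):
    """Mean squared error between predicted and ground-truth scores s."""
    return float(np.mean((pred - target) ** 2))

# A raw network output of -0.5 maps to a valid score of 0 (perfect quality).
print(mse(relu(np.array([-0.5, 1.0])), np.array([0.0, 1.0])))  # → 0.0
```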
Figure 2.
Training data simulation using clinical raw k-space data. (A) Examples of simulated head positions and corresponding ky sampling in each shot of an accelerated 6-shot acquisition. We reconstruct an image by combining shots and using the vendor’s SDK. (B) For each shot, we create moved multi-channel images, transform (FT) to k-space, add noise, and keep only the ky lines for that shot. (C) Image examples with corresponding ground-truth artifact scores for motion injected in the first 3 shots. (D) Resulting distribution of artifact scores across the simulated training set.
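The shot-wise ky sampling in panels (A)-(B) can be sketched as follows. This is an illustrative numpy toy, assuming a simple interleaved pattern in which each of 6 shots keeps every sixth ky line; the function names, noise model, and interleave are assumptions, not the vendor sampling scheme or the authors' code.

```python
import numpy as np

def simulate_shots(image, n_shots=6, noise_std=0.01, rng=None):
    """Split a 2D image's k-space into interleaved ky shots, adding noise.

    Hypothetical sketch: each shot keeps every n_shots-th ky line,
    mimicking an interleaved multi-shot acquisition.
    """
    rng = np.random.default_rng() if rng is None else rng
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))            # transform (FT) to k-space
    kspace = kspace + noise_std * (rng.standard_normal(kspace.shape)
                                   + 1j * rng.standard_normal(kspace.shape))
    shots = []
    for s in range(n_shots):
        mask = np.zeros((ny, 1))
        mask[s::n_shots] = 1.0                              # keep only this shot's ky lines
        shots.append(kspace * mask)
    return shots

# Combining all shots and inverting the FT recovers the (noise-free) image.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0
shots = simulate_shots(image, noise_std=0.0)
recon = np.fft.ifft2(np.fft.ifftshift(sum(shots)))
print(np.allclose(np.abs(recon), image, atol=1e-9))        # → True
```

Injecting motion would amount to translating or rotating the image before the FT in a given shot, so that shot's ky lines encode an inconsistent head position.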
Figure 3.
Interlacer-type layer used within the convolutional encoder of the artifact-rating model (Figure 1). The layer separately convolves image and k-space features and mixes these features via channel-wise addition after the appropriate forward or backward Fourier Transform (FT). This enables the simultaneous extraction of information from neighboring voxels in image space and similar spatial frequencies in k-space despite the different intensity scales of both spaces. We apply operations other than the FT on concatenated real and imaginary channels.
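The channel-wise mixing described above can be sketched as follows. For clarity this hypothetical numpy version omits the learned convolutions and shows only the cross-domain mixing: each branch's features are Fourier-transformed into the other domain and added.

```python
import numpy as np

def interlacer_mix(img_feat, k_feat):
    """Sketch of Interlacer-style mixing: convolutions omitted (assumption).

    The image branch receives the backward-FT of the k-space features;
    the k-space branch receives the forward-FT of the image features.
    Each output thus combines neighboring voxels in image space with
    similar spatial frequencies in k-space.
    """
    img_out = img_feat + np.fft.ifft2(k_feat, axes=(-2, -1))   # backward FT, then add
    k_out = k_feat + np.fft.fft2(img_feat, axes=(-2, -1))      # forward FT, then add
    return img_out, k_out

# If the two branches hold the same content in their respective domains,
# mixing simply doubles it in each domain.
x = np.random.default_rng(0).standard_normal((8, 8))
img_out, k_out = interlacer_mix(x, np.fft.fft2(x))
print(np.allclose(img_out.real, 2 * x))                        # → True
```

In the real layer, the concatenated real and imaginary channels would pass through separate convolutions in each branch before this mixing step.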
Figure 4.
Artifact prediction accuracy. A score s=0 represents perfect image quality. Left: simulated data for subjects held out during training (MSE 0.13). Center: volunteer data with the first 3 shots substituted across 5 back-to-back scans in different head positions (MSE 0.17). Right: deviation from ground truth for capacity-matched models operating in k-space, image space, or both spaces (using Interlacer-type convolutions, Figure 3), for the acquired data of the central panel. Each panel shows scores for 1475 slices. Black lines represent median scores.
Figure 5.
(A) Validation data generation, mixing shots from volunteer scans in different head positions. (B) Representative examples of mixed-validation images along with artifact scores. We show ground-truth scores (GT) obtained from the IQD model, trained with expert ratings by the original authors. This model predicts a score in the interval [0, 3] for a magnitude image reconstructed from the full acquisition, where higher scores indicate stronger artifacts. In contrast, our model predicts scores from the first 3 of 6 shots, that is, when the scan is only half complete.

