IEEE Access. 2020;8:83858-83870. doi: 10.1109/ACCESS.2020.2992204. Epub 2020 May 13.

MRI restoration using edge-guided adversarial learning


Yaqiong Chai et al. IEEE Access. 2020.

Abstract

Magnetic resonance imaging (MRI) scans acquired as multislice two-dimensional (2D) images present challenges when reformatted in orthogonal planes because of sparser sampling in the through-plane direction. Restoring the "missing" through-plane slices, or regions of an MRI image damaged by acquisition artifacts, can be modeled as an image imputation task. In this work, we treat the damaged image data or missing through-plane slices as image masks and propose an edge-guided generative adversarial network to restore brain MRI images. Inspired by image inpainting, the proposed method decouples image repair into two stages, edge connection and contrast completion, both of which use generative adversarial networks (GANs). We trained and evaluated the method for thick-slice imputation on a dataset from the Human Connectome Project, and evaluated artifact correction on clinical data and simulated datasets. Our Edge-Guided GAN (EG-GAN) achieved superior PSNR, SSIM, conspicuity, and signal texture compared with traditional imputation tools, the Context Encoder, and the Densely Connected Super-Resolution Network with GAN (DCSRN-GAN). The proposed network may improve the utilization of clinical 2D scans for 3D atlas generation and big-data comparative studies of brain morphometry.

Keywords: artifact correction; edge; generative adversarial network; image restoration; imputation; magnetic resonance imaging.
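As a rough illustration of the masking model described in the abstract (a sketch only; the array shapes, axis ordering, and one-in-three slice spacing are assumptions, not the paper's settings), missing through-plane slices can be represented as a 1-valued binary mask over an axial stack:

import numpy as np

# Hypothetical T1-weighted volume with axial slices along axis 0: (slices, rows, cols).
volume = np.random.rand(64, 256, 256).astype(np.float32)

# Assumed masking pattern: mark every third axial slice as "missing" (1-valued in the mask).
mask = np.zeros_like(volume)
mask[::3, :, :] = 1.0

# The masked volume is what a restoration network would receive as input; in coronal or
# sagittal reformats, the masked slices appear as zeroed rows.
masked_volume = volume * (1.0 - mask)
coronal_reformat = masked_volume[:, 128, :]  # one coronal plane with missing rows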


Figures

Fig. 1.
A stack of 2D MRI images. (a) The resolution in the through-plane directions (coronal and sagittal) is usually much lower than in the in-plane (axial) direction. (b) The slices to be restored are initially modeled as 1-valued masks in the through-plane direction and appear as masked rows in the other two planes. Note that the masked region shown in the coronal and sagittal views in (b) is one slice out of every three, which differs from the mask size implemented in this paper (see Section III.A); it is a schematic illustration of how missing slices are represented in the two orthogonal planes.
Fig. 2.
Framework of the proposed method. The disconnected edges and their corresponding mask patterns are used to train the edge generator, with the edges extracted from the original image serving as references. The contrast generator, trained on pairs of masked images and ground truth, uses the completed edges produced by the edge generator as constraints. Note that the ground-truth image is fed to the edge discriminator as prior information, and to the contrast discriminator to be differentiated from the recovered image.
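Below is a minimal sketch of this two-stage data flow, assuming placeholder convolutional generators and randomly generated inputs; the layer configurations, edge-extraction step, and tensor shapes are illustrative only and do not reproduce the paper's networks (those are described in Fig. 3):

import torch
import torch.nn as nn

# Placeholder generators; the paper's ResNet-based architectures are sketched in Fig. 3.
edge_generator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
contrast_generator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

masked_image = torch.rand(1, 1, 256, 256)   # image with masked (zeroed) rows
mask = torch.zeros(1, 1, 256, 256)
mask[:, :, ::3, :] = 1.0                    # 1-valued rows mark the missing region
broken_edges = torch.rand(1, 1, 256, 256)   # edge map of the masked image (extractor assumed)

# Stage 1: connect the broken edges inside the masked region.
completed_edges = edge_generator(torch.cat([masked_image, broken_edges, mask], dim=1))

# Stage 2: complete the image contrast, constrained by the completed edge map.
restored = contrast_generator(torch.cat([masked_image, completed_edges, mask], dim=1))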
Fig. 3.
Individual architectures of the edge-connection and contrast-completion networks. The hyperparameters of each layer are labeled as K (kernel size), N (number of channels), and S (stride). Both generators are built from ResNet blocks with different normalization strategies: spectral normalization is applied after each layer in the edge generator to enhance stability, whereas several layers in the contrast generator omit spectral normalization to speed up training. Note that the discriminators of both networks use the same hyperparameters.
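As one hedged reading of the normalization described above (the residual-block layout, channel count, and use of instance normalization are assumptions, not the paper's exact configuration), spectral normalization can be attached to each convolution of a ResNet block in PyTorch:

import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNResBlock(nn.Module):
    """Residual block with spectral normalization on every convolution,
    in the spirit of the edge generator described in the caption."""
    def __init__(self, channels=256, dilation=1):
        super().__init__()
        self.block = nn.Sequential(
            spectral_norm(nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            spectral_norm(nn.Conv2d(channels, channels, 3, padding=1)),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)   # identity skip connection

x = torch.rand(1, 256, 64, 64)
y = SNResBlock()(x)                # output shape matches the input: (1, 256, 64, 64)

Dropping the spectral_norm wrappers on selected layers, as the caption describes for the contrast generator, trades some training stability for speed.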
Fig. 4.
MRI through-plane imputation results: representative coronal and sagittal slices of one subject from the HCP dataset. The original T1-weighted image (reference) is down-sampled in the through-plane direction, and the missing axial slices are restored by different methods: nearest neighbor, cubic, context encoder, and EG-GAN. Our method provides more visually plausible results that recover more brain anatomy without staircase effects or broken white-matter tracts. Compared with the context encoder, EG-GAN mitigates the strip-shaped artifacts caused by the mask model.
Fig. 5.
MRI super-resolution results: representative coronal and sagittal slices of one subject from the HCP dataset. Low-resolution coronal and sagittal planes are generated by down-sampling the original T1-weighted scan (reference) in pseudo 3D k-space, and 3D high-resolution scans are reconstructed from multiple 2D axial slices by different methods: linear, DCSRN-GAN, context encoder, and EG-GAN. Our approach reconstructs images with more anatomically plausible details and more distinct edges.
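A sketch of how such a low-resolution input can be simulated by truncating pseudo k-space along one axis (the crop fraction, axis choice, and simple low-pass crop are assumptions; the paper's exact down-sampling protocol is not reproduced here):

import numpy as np

volume = np.random.rand(256, 256, 256).astype(np.float32)  # hypothetical isotropic T1 volume

# Transform the through-plane axis (axis 0) to pseudo k-space.
kspace = np.fft.fftshift(np.fft.fft(volume, axis=0), axes=0)

# Keep only the central quarter of k-space lines; zero out the high frequencies.
n = volume.shape[0]
keep = n // 4
lowpass = np.zeros_like(kspace)
lowpass[n // 2 - keep // 2 : n // 2 + keep // 2] = kspace[n // 2 - keep // 2 : n // 2 + keep // 2]

# Back to image space: a volume blurred (low-resolution) in the through-plane direction.
low_res = np.abs(np.fft.ifft(np.fft.ifftshift(lowpass, axes=0), axis=0))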
Fig. 6.
Clinical axial (a) and sagittal (d) scans with zipper artifacts and their corresponding zoomed-in regions, masks overlaid on the artifact-corrupted rows (b, e), and artifact-corrected images produced by EG-GAN (c, f) with the magnified areas, respectively.
Fig. 7.
Spike artifact removal: representative coronal slices of two subjects from the HCP dataset. The artifacts are simulated by adding two different random spike gradients (a, d) in pseudo k-space. The corrected images and their corresponding references are shown in (b, e) and (c, f), respectively.
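One simple way to emulate such a spike corruption, offered only as an illustration (the spike location and magnitude are arbitrary, not the paper's simulation settings), is to overwrite a single sample in pseudo k-space and reconstruct:

import numpy as np

image = np.random.rand(256, 256).astype(np.float32)   # hypothetical coronal slice

# Forward 2D FFT to pseudo k-space.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Inject a spike: one off-center k-space sample overwritten with a large value.
ky, kx = 40, 200                                       # arbitrary location
kspace[ky, kx] = 50 * np.abs(kspace).max()

# Reconstruct; the spike appears as a periodic stripe pattern across the image.
corrupted = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))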

