Evaluation of motion artefact reduction depending on the artefacts' directions in head MRI using conditional generative adversarial networks
- PMID: 37237139
- PMCID: PMC10220077
- DOI: 10.1038/s41598-023-35794-1
Abstract
Motion artefacts caused by patient body movement degrade magnetic resonance imaging (MRI) accuracy. This study compared the accuracy of motion-artefact correction using a conditional generative adversarial network (CGAN) against autoencoder and U-net models. The training dataset consisted of motion artefacts generated through simulation. Motion artefacts appear along the phase-encoding direction, which was set to either the horizontal or vertical direction of the image. To create T2-weighted axial images with simulated motion artefacts, 5500 head images were used for each direction. Of these, 90% were used for training and the remainder for image-quality evaluation; the validation data used during model training comprised 10% of the training dataset. The training data were divided by the direction of motion-artefact appearance (horizontal or vertical), and the effect of combining both directions in the training dataset was verified. The corrected images were evaluated using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR), computed against the corresponding artefact-free images. The greatest improvements in SSIM and PSNR were observed when the artefact direction was consistent between the training and evaluation datasets. Nevertheless, SSIM > 0.9 and PSNR > 29 dB were achieved by the model trained on both image directions, and this model exhibited the highest robustness to actual patient motion in head MRI images. Moreover, the image quality of the CGAN-corrected images was closest to that of the original images, with improvement rates of approximately 26% for SSIM and 7.7% for PSNR. The CGAN model demonstrated high image reproducibility, and performance was best when the direction of artefact appearance in the training data matched that in the evaluation data.
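The simulated artefacts described in the abstract arise from phase errors along the phase-encoding direction of k-space. A minimal numpy sketch of this idea, assuming a rigid in-plane translation during part of the acquisition (the function names, 3-pixel shift, and 30% corrupted-line fraction are illustrative assumptions, not the authors' actual simulation pipeline; their SSIM computation is omitted here for brevity):

```python
import numpy as np

def simulate_motion_artefact(image, pe_axis=0, shift_px=3.0,
                             corrupted_frac=0.3, seed=0):
    """Corrupt a random fraction of phase-encoding lines in k-space with a
    linear phase ramp, mimicking a rigid translation during acquisition."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    n_pe = image.shape[pe_axis]
    lines = rng.choice(n_pe, size=int(corrupted_frac * n_pe), replace=False)
    # A translation of shift_px along the frequency-encoding axis multiplies
    # k-space by exp(-2*pi*i * f * shift_px) along that axis.
    fe_axis = 1 - pe_axis
    freqs = np.fft.fftshift(np.fft.fftfreq(image.shape[fe_axis]))
    ramp = np.exp(-2j * np.pi * freqs * shift_px)
    for ln in lines:
        if pe_axis == 0:
            k[ln, :] *= ramp   # artefact propagates along the vertical PE axis
        else:
            k[:, ln] *= ramp   # artefact propagates along the horizontal PE axis
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Setting `pe_axis` to 0 or 1 reproduces the paper's distinction between vertical and horizontal artefact directions, since ghosting replicates along whichever axis is phase-encoded.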
© 2023. The Author(s).
Conflict of interest statement
The authors declare no competing interests.
Similar articles
- MRI motion artifact reduction using a conditional diffusion probabilistic model (MAR-CDPM). Med Phys. 2024 Apr;51(4):2598-2610. doi: 10.1002/mp.16844. PMID: 38009583
- Reducing image artifacts in sparse projection CT using conditional generative adversarial networks. Sci Rep. 2024 Feb 16;14(1):3917. doi: 10.1038/s41598-024-54649-x. PMID: 38365934
- A Physics-Informed Deep Learning Model for MRI Brain Motion Correction. ArXiv [Preprint]. 2025 Feb 13:arXiv:2502.09296v1. PMID: 39990792
- CT artifact correction for sparse and truncated projection data using generative adversarial networks. Med Phys. 2021 Feb;48(2):615-626. doi: 10.1002/mp.14504. PMID: 32996149. Review.
- Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review. IEEE Trans Med Imaging. 2024 Feb;43(2):846-859. doi: 10.1109/TMI.2023.3323215. PMID: 37831582. Review.
Cited by
- Motion Artifact Reduction Using U-Net Model with Three-Dimensional Simulation-Based Datasets for Brain Magnetic Resonance Images. Bioengineering (Basel). 2024 Feb 27;11(3):227. doi: 10.3390/bioengineering11030227. PMID: 38534500
- Assessment of Subject Head Motion in Diffusion MRI. Proc SPIE Int Soc Opt Eng. 2024 Feb;12926:129261B. doi: 10.1117/12.3006633. PMID: 39220213