Proj2Proj: self-supervised low-dose CT reconstruction

Mehmet Ozan Unal et al. PeerJ Comput Sci. 2024 Feb 29;10:e1849. doi: 10.7717/peerj-cs.1849. eCollection 2024.

Abstract

In Computed Tomography (CT) imaging, ionizing radiation has always been one of the most serious concerns. Several approaches have been proposed to reduce the dose level without compromising image quality. With the emergence of deep learning, enabled by the increasing availability of computational power and large datasets, data-driven methods have recently received considerable attention. Deep learning-based methods have been applied in various ways to the low-dose CT reconstruction problem. However, the success of these methods largely depends on the availability of labeled data. Recent studies have shown, on the other hand, that training can be carried out successfully without labeled datasets. In this study, a training scheme was defined that uses low-dose projections as their own training targets. The self-supervision principle was applied in the projection domain, and the parameters of a denoiser neural network were optimized through self-supervised training. Our method was shown to outperform traditional and compressed sensing-based iterative methods, as well as deep learning-based unsupervised methods, in the reconstruction of analytic CT phantoms and human CT images in low-dose CT imaging. Its reconstruction quality is also comparable to that of a well-known supervised method.

Keywords: Deep learning; Image reconstruction; Low-dose CT; Self-supervised learning.
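
The abstract only outlines the training scheme, but a minimal sketch of projection-domain self-supervision in this spirit might look as follows. This is an illustrative assumption, not the authors' implementation: the small Denoiser CNN, the pairing of each measured sinogram with a re-noised copy, the noise level sigma, and the data loader of sinogram batches are all placeholders.

```python
# Illustrative sketch (PyTorch): self-supervised denoising in the projection
# domain, where measured low-dose sinograms serve as their own training targets.
import torch
import torch.nn as nn


class Denoiser(nn.Module):
    """Small CNN standing in for the paper's denoiser network (placeholder)."""

    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def train_self_supervised(loader, sigma=0.05, epochs=10, device="cpu"):
    """Optimize the denoiser without clean targets.

    Each measured (already noisy) sinogram is perturbed with additional
    synthetic noise; the network is trained to map the re-noised input back
    to the measured projections, so no ground-truth data are required.
    """
    model = Denoiser().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for sinogram in loader:  # tensors of shape (batch, 1, n_views, n_detectors)
            sinogram = sinogram.to(device)
            noisy_input = sinogram + sigma * torch.randn_like(sinogram)
            optimizer.zero_grad()
            loss = loss_fn(model(noisy_input), sinogram)
            loss.backward()
            optimizer.step()
    return model
```

At inference time, the measured sinogram would be passed through the trained denoiser and then reconstructed with a standard algorithm such as FBP; the actual noise model, network architecture, and training details are those described in the paper itself.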


Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Figure 1
Figure 1. Proposed working schema for self-supervised low-dose CT reconstruction.
Image source credit: Yang et al. (2018).
Figure 2
Figure 2. Shepp–Logan phantom reconstruction results from 64-view projections with 37 dB noise level: (A) ground truth, (B) FBP, (C) SART, (D) SART+TV, (E) SART+BM3D (σ = 0.35), (F) SART+BM3D (σ = 0.20), (G) DIP+TV, (H) FBP+U-Net, (I) Proj2Proj trained on ellipses dataset.
Figure 3
Figure 3. Ellipses image reconstruction results from 64-view projections with 33 dB noise level: (A) ground truth, (B) FBP, (C) SART, (D) SART+TV, (E) SART+BM3D (σ = 0.35), (F) SART+BM3D (σ = 0.20), (G) DIP+TV, (H) FBP+U-Net, (I) Proj2Proj trained on ellipses dataset.
Image source credit: Ellipses dataset, https://github.com/jleuschn/dival/tree/master/dival/datasets.
Figure 4
Figure 4. Human CT image reconstruction results from 64-view projections with 33 dB SNR noise level: (A) ground truth, (B) FBP, (C) SART, (D) SART+TV, (E) SART+BM3D (σ = 0.35), (F) SART+BM3D (σ = 0.20), (G) DIP+TV, (H) FBP+U-Net, (I) Proj2Proj trained on human CT dataset.
Image source credit: Yang et al. (2018).
Figure 5
Figure 5. Human CT image reconstruction results from 64-view projections with 37 dB SNR noise level: (A) ground truth, (B) FBP, (C) SART, (D) SART+TV, (E) SART+BM3D (σ = 0.35), (F) SART+BM3D (σ = 0.20), (G) DIP+TV, (H) FBP+U-Net, (I) Proj2Proj trained on human CT dataset.
Image source credit: Yang et al. (2018).
Figure 6
Figure 6. The 1-D profiles of the reconstructions, from left to right: ground truth, SART+BM3D (σ = 0.20), DIP+TV, FBP+U-Net, proposed Proj2Proj method.
Image source credit: Yang et al. (2018).
Figure 7
Figure 7. The 1-D profiles of the reconstructions, from left to right: ground truth, SART+BM3D (σ = 0.20), DIP+TV, FBP+U-Net, proposed Proj2Proj method.
Image source credit: Yang et al. (2018).

