Imitation learning for improved 3D PET/MR attenuation correction

Kerstin Kläser et al. Med Image Anal. 2021 Jul;71:102079. doi: 10.1016/j.media.2021.102079. Epub 2021 Apr 16.

Abstract

The quality of synthesised/pseudo Computed Tomography (pCT) images is commonly assessed by an intensity-wise similarity between the ground-truth CT and the pCT. However, when the pCT is used as an attenuation map (μ-map) for PET reconstruction in Positron Emission Tomography/Magnetic Resonance Imaging (PET/MRI), minimising the error between pCT and CT neglects the main objective: predicting a pCT that, when used as a μ-map, reconstructs a pseudo PET (pPET) that is as similar as possible to the gold-standard CT-derived PET reconstruction. This observation motivated us to propose a novel multi-hypothesis deep learning framework explicitly aimed at the PET reconstruction application. A convolutional neural network (CNN) synthesises pCTs by minimising a combination of the pixel-wise error between pCT and CT and a novel metric loss that is itself defined by a CNN and aims to minimise the consequent PET residuals. Training is performed on a database of twenty 3D MR/CT/PET brain image pairs. Quantitative results on a fully independent dataset of twenty-three 3D MR/CT/PET image pairs show that the network synthesises more accurate pCTs: the Mean Absolute Error on the pCT (110.98 HU ± 19.22 HU) is lower than that of a baseline CNN (172.12 HU ± 19.61 HU) and of a multi-atlas propagation approach (153.40 HU ± 18.68 HU), and this subsequently leads to a significant improvement in the PET reconstruction error (4.74% ± 1.52%, compared to 13.72% ± 2.48% for the baseline and 6.68% ± 2.06% for multi-atlas propagation).
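The synthesis objective described above combines a pixel-wise L2 term with a learned metric term. A minimal sketch of such a combined loss, assuming a generic `imitation_net` callable that predicts the consequent PET residual and an illustrative weighting factor `alpha` (neither name is from the paper):

```python
import numpy as np

def combined_loss(pct, ct, imitation_net, alpha=1.0):
    """Combined synthesis loss: pixel-wise L2 between pCT and CT, plus a
    learned metric term given by the imitation network's predicted PET
    residual. `imitation_net` and `alpha` are illustrative placeholders."""
    l2_ct = np.mean((pct - ct) ** 2)                 # pixel-wise L2 on the CT
    predicted_pet_residual = imitation_net(pct, ct)  # learned metric (Net2)
    l2_il = np.mean(predicted_pet_residual ** 2)     # imitation-learning term
    return l2_ct + alpha * l2_il
```

In the actual framework both terms are minimised jointly after a first stage in which the imitation term is weighted to zero (see Fig. 2).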

Keywords: Convolutional neural network; Deep learning; Imitation learning; MR to CT synthesis.


Conflict of interest statement

Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Graphical abstract

Fig. 1. Top row: ground-truth CT, T1-weighted MRI, T2-weighted MRI. Bottom row: pseudo CT, absolute error between the ground-truth and pseudo CT, and absolute error between PETs reconstructed using the ground-truth CT and the pseudo CT for attenuation correction. Small and very localised differences in the CT lead to large errors in the PET image. We argue that algorithms used for PET attenuation correction should optimise for PET residuals and not only for CT residuals.
Fig. 2. Yellow solid box: semantic regression. A first CNN (Net1) with MR images as inputs predicts multiple valid pCT realisations by minimising a combination of the L2-loss between true CT and pCT (L2-loss CT) and a learned metric loss (L2-loss IL). In the first training stage, only L2-loss CT is considered and L2-loss IL is weighted to zero. Purple dashed box: imitation network. A second CNN (Net2) with pCTs and corresponding CTs as input predicts the residuals between the PET reconstructed with the true CT-derived μ-map and the pPET reconstructed with the pCT as μ-map, by minimising L2-loss PET. The training of the semantic regression and imitation networks is performed in three separate stages. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Fig. 3. PET simulation: a PET forward projection is applied to the μ-map-transformed CT to obtain attenuation-factor sinograms. A similar forward projection is applied to the original PET to obtain simulated emission sinograms. Final pPETs are reconstructed from the simulated emission sinograms using pCT-derived attenuation maps.
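The simulation pipeline in Fig. 3 can be sketched as follows, assuming generic `forward_project`/`backproject` operators (e.g. a Radon transform pair) standing in for the scanner geometry; a real pipeline would use an iterative reconstruction such as OSEM rather than a plain backprojection:

```python
import numpy as np

def simulate_ppet(ct_mu_map, pct_mu_map, pet, forward_project, backproject):
    """Illustrative sketch of the PET simulation pipeline.
    `forward_project` and `backproject` are assumed projector operators."""
    # Attenuation-factor sinogram from the true CT-derived mu-map
    af_sino = np.exp(-forward_project(ct_mu_map))
    # Simulated (attenuated) emission sinogram from the original PET
    emission_sino = forward_project(pet) * af_sino
    # Reconstruct the pPET, correcting attenuation with the pCT-derived mu-map
    pct_af_sino = np.exp(-forward_project(pct_mu_map))
    return backproject(emission_sino / pct_af_sino)
```

With a perfect pCT (identical μ-maps) the attenuation factors cancel and the pPET matches the gold-standard reconstruction; any μ-map error propagates into the pPET residual that the imitation network learns to predict.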
Fig. 4. PET values (left column), variance (middle column) and Z-score (right column) of the ground-truth PET (top row) compared to pPET values reconstructed with pCTs from Monte Carlo (MC) dropout sampling (middle row) and pCTs from multi-hypothesis sampling (bottom row). The multi-hypothesis model captures the true PET values better than the MC dropout method.
Fig. 5. Qualitative results. From top to bottom: ground truth, baseline (HighRes3DNet) and imitation learning, alongside the input MR images. From left to right: CT, pCT-CT residuals, PET, pPET-PET residuals. The error in the pCT generated with the proposed imitation learning is lower than the baseline pCT residuals, and the error in the pPET reconstructed with the proposed method is significantly lower than the pPET error of the baseline method.
Fig. 6. From left to right: the acquired T1- and T2-weighted MRI, CT, and ground-truth 18F-FDG PET; the pCT and pPET generated with the baseline (HighRes3DNet only); the pCT and pPET generated with multi-atlas propagation; and the pCT and pPET generated with the proposed imitation learning, for the subjects within the independent validation dataset that obtained the lowest (top row), average (middle row), and highest (bottom row) MAPE in the pPET, which was consistent among all methods.
Fig. 7. Groupwise average over 23 subjects (top) and standard deviation (bottom) of the pCT absolute residuals (in HU) for baseline, multi-atlas propagation and imitation learning (columns 1–3), and of the pPET absolute residuals (in arbitrary units (a.u.)) between the gold-standard PET and the pPETs reconstructed with the baseline, multi-atlas propagation and imitation learning pCTs (columns 4–6).
