Comput Med Imaging Graph. 2022 Sep;100:102094.
doi: 10.1016/j.compmedimag.2022.102094. Epub 2022 Jul 26.

Virtual contrast enhancement for CT scans of abdomen and pelvis


Jingya Liu et al. Comput Med Imaging Graph. 2022 Sep.

Abstract

Contrast agents are commonly used to highlight blood vessels, organs, and other structures in magnetic resonance imaging (MRI) and computed tomography (CT) scans. However, these agents may cause allergic reactions or nephrotoxicity, limiting their use in patients with kidney dysfunction. In this paper, we propose a generative adversarial network (GAN) based framework to automatically synthesize contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis. Respiratory and peristaltic motion can disturb the pixel-level mapping required for contrast-enhancement learning, making this task more challenging than in other body regions. A perceptual loss is introduced to compare high-level semantic differences of the enhancement areas between virtual and actual contrast-enhanced CT images. Furthermore, to synthesize intensity details accurately while retaining the texture structures of CT images, a dual-path training schema is proposed to learn texture and structure features simultaneously. Experimental results on three contrast phases (arterial, portal, and delayed) show the potential to synthesize virtual contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis for clinical evaluation.

Keywords: Contrast-enhanced CT; Deep learning; Generative adversarial network; Image synthesis.


Conflict of interest statement

Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Figure 1:
Illustration of contrast-enhanced CT generation. (a) Actual contrast-enhanced CT scan with contrast agent (dye) injection in patients. (b) Virtual contrast-enhanced CT by our proposed framework generated directly from the non-contrast CT.
Figure 2:
The proposed detail-aware dual-path framework synthesizes virtual contrast-enhanced CT images directly from non-contrast CT images. The global path takes three consecutive whole CT slices as input to extract global structure features. The local path divides the whole image into four patches to extract finer texture features and generates four corresponding virtual contrast patches, which are reassembled into a whole image for the objective function. In the training phase, a perceptual loss compares the virtual contrast-enhanced CT with the actual contrast-enhanced CT. The total objective function combines the costs of the global and local paths for backpropagation.
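The dual-path idea in the caption above can be sketched in a few lines: split a slice into four quadrant patches for the local path, reassemble them into a whole image, and sum the two path losses. This is a minimal illustration, not the authors' code; the quadrant layout and the weighting term `lam` are assumptions.

```python
import numpy as np

def split_into_patches(image):
    """Split a square CT slice into four equal quadrant patches (local path).
    Quadrant layout is an assumption; the paper only says four patches."""
    h, w = image.shape
    hh, hw = h // 2, w // 2
    return [image[:hh, :hw], image[:hh, hw:], image[hh:, :hw], image[hh:, hw:]]

def merge_patches(patches):
    """Reassemble the four quadrant patches into one whole image,
    as the caption describes for the objective function."""
    top = np.hstack([patches[0], patches[1]])
    bottom = np.hstack([patches[2], patches[3]])
    return np.vstack([top, bottom])

def combined_objective(loss_global, loss_local, lam=1.0):
    """Combine global- and local-path costs; lam is a hypothetical weight."""
    return loss_global + lam * loss_local
```

Splitting and merging are exact inverses here, so the local path sees the same pixels as the global path, just at patch granularity.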
Figure 3:
The architecture of the virtual contrast-enhanced CT predictor. 1) The generator comprises a five-layer encoder-decoder network and takes three consecutive CT slices as input; parallel connections between the layers preserve both high-level and low-level features. 2) The discriminator distinguishes actual from virtual contrast-enhanced CT by extracting features through two networks, fed with the actual and the virtual contrast-enhanced CT respectively.
Figure 4:
The framework used to obtain the pretrained model on the NLST dataset. 1) Each input CT image is transformed into four intensity levels using the intensity coefficients α ∈ {0.5, 1.0, 1.5, 2.0} applied to the original CT image; the four levels serve as categories for a pretext classification task that requires no extra labeling. 2) The intensity-level classification network comprises the generator network followed by three fully connected (FC) layers that output the prediction probability for each intensity level. The trained generator is then employed as the pretrained model for the proposed generator.
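The self-supervised pretext task in the caption above amounts to scaling each slice by one of four coefficients and using the coefficient index as a free class label. A minimal sketch, assuming simple multiplicative scaling of the pixel array (the paper does not specify clipping or normalization details):

```python
import numpy as np

# Intensity coefficients from the caption; the class label is just the index.
ALPHAS = [0.5, 1.0, 1.5, 2.0]

def make_pretext_samples(ct_slice):
    """Produce four intensity-scaled copies of a CT slice, each paired with
    its intensity-level label (0..3). No manual annotation is required,
    which is the point of the pretext task."""
    return [(alpha * ct_slice, label) for label, alpha in enumerate(ALPHAS)]
```

A classifier trained to recover the label from the scaled image forces the generator backbone to become sensitive to intensity levels, which is then reused as initialization for contrast synthesis.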
Figure 5:
The illustration for heatmaps of perceptual loss computed by averaging the feature-level differences between virtual and actual contrast CTs at the last four convolution blocks.
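The perceptual loss described above averages feature-level differences between virtual and actual contrast CTs over the last four convolution blocks. A rough sketch of that computation on precomputed feature maps, using a mean absolute difference (the exact distance metric is an assumption; the paper only states that feature-level differences are averaged):

```python
import numpy as np

def perceptual_loss(feats_virtual, feats_actual):
    """Average the per-block mean absolute difference between two lists of
    feature maps (e.g. the last four convolution blocks of a fixed network).
    feats_virtual / feats_actual: lists of same-shaped numpy arrays."""
    diffs = [np.mean(np.abs(fv - fa))
             for fv, fa in zip(feats_virtual, feats_actual)]
    return float(np.mean(diffs))
```

Because the comparison happens in feature space rather than pixel space, small spatial misalignments from respiratory or peristaltic motion penalize the generator less than a pixel-wise loss would.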
Figure 6:
Assessments by two radiologists for the arterial, portal, and delayed phases in three aspects (overall case, organs, and vascular structures) using a 5-level scoring schema from 1 (poor) to 5 (excellent). The proposed method (red) outperforms the state-of-the-art methods Contrastive GAN (green) [46] and CycleGAN (blue) [45] in all evaluation aspects.
Figure 7:
Visualizations of virtual contrast-enhanced CT synthesized by the proposed framework, compared with the existing state-of-the-art methods [45], [46] in the arterial, portal, and delayed phases. (a) The pre-contrast CT. (b) The actual contrast-enhanced CT. (c) The virtual contrast-enhanced CT by Contrastive GAN [46]. (d) The virtual contrast-enhanced CT by CycleGAN [45]. (e) The virtual contrast-enhanced CT by the proposed framework.
Figure 8:
Qualitative results of the ablation study on the discriminator, perceptual loss, and dual-path training schema for the arterial, portal, and delayed phases. (a) The pre-contrast CT. (b) The actual contrast-enhanced CT. (c) The virtual contrast-enhanced CT without the discriminator. (d) The virtual contrast-enhanced CT with only global-path training. (e) The virtual contrast-enhanced CT without the perceptual loss. (f) The proposed virtual contrast-enhanced CT.
Figure 9:
Training (upper) and validation (lower) losses for i) the baseline model [44]; ii) the model with pretrained weights and perceptual loss; iii) the model with pretrained weights, perceptual loss, and the dual-path strategy (our proposed method).
Figure 10:
Failure examples of virtual contrast-enhanced CT synthesized by the proposed framework. (a) The pre-contrast CT. (b) The actual contrast-enhanced CT. (c) The virtual contrast-enhanced CT.

References

    1. Liu J, Li M, Wang J, Wu F, Liu T, Pan Y. A survey of MRI-based brain tumor segmentation methods. Tsinghua Science and Technology 19(6) (2014) 578–595.
    2. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Medical Image Analysis 42 (2017) 60–88.
    3. Brenner DJ, Hricak H. Radiation exposure from medical imaging: time to regulate? JAMA 304(2) (2010) 208–209.
    4. Beckett KR, Moriarity AK, Langer JM. Safe use of contrast media: what the radiologist needs to know. Radiographics 35(6) (2015) 1738–1750.
    5. Andreucci M, Solomon R, Tasanarong A. Side effects of radiographic contrast media: pathogenesis, risk factors, and prevention. BioMed Research International (2014).
