Cross-modality (CT-MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets

Jue Jiang et al. Med Phys. 2019 Oct;46(10):4392-4404. doi: 10.1002/mp.13695. Epub 2019 Aug 20.
Abstract

Purpose: Accurate tumor segmentation is a requirement for magnetic resonance (MR)-based radiotherapy. The lack of large, expert-annotated MR datasets makes training deep learning models difficult. Therefore, a cross-modality (MR-CT) deep learning segmentation approach was developed that augments training data using pseudo MR images produced by transforming expert-segmented CT images.

Methods: Eighty-one T2-weighted (T2w) MRI scans from 28 patients with non-small cell lung cancers (nine with pretreatment and weekly MRI and the remainder with pretreatment MRI only) were analyzed. A cross-modality model encoding the transformation of CT into pseudo MR images resembling T2w MRI was learned as a generative adversarial deep learning network. This model was used to translate 377 expert-segmented non-small cell lung cancer CT scans from The Cancer Imaging Archive into pseudo MRI that served as an additional training set. This method was benchmarked against shallow learning using a random forest, standard data augmentation, and three state-of-the-art adversarial learning-based cross-modality data (pseudo MR) augmentation methods. Segmentation accuracy was computed using the Dice similarity coefficient (DSC), Hausdorff distance metrics, and volume ratio.
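
As a rough illustration of the adversarial CT-to-pseudo-MR translation described above, the sketch below shows a minimal GAN training step in PyTorch. All network shapes, optimizer settings, and data here are toy assumptions; the paper's actual model (Fig. 3) is far larger and additionally uses cycle consistency and tumor-attention losses.

    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):  # CT -> pseudo MR (toy stand-in)
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
        def forward(self, x):
            return self.net(x)

    class TinyDiscriminator(nn.Module):  # real T2w MR vs. pseudo MR
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(16, 1, 4, stride=2, padding=1))
        def forward(self, x):
            return self.net(x)

    G, D = TinyGenerator(), TinyDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    ct = torch.randn(4, 1, 64, 64)  # stand-in CT batch
    mr = torch.randn(4, 1, 64, 64)  # stand-in T2w MR batch

    # Discriminator step: push real MR toward 1 and pseudo MR toward 0.
    pseudo = G(ct)
    d_real, d_fake = D(mr), D(pseudo.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: make pseudo MR fool the discriminator.
    d_fake = D(G(ct))
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()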

Results: The proposed approach produced the lowest statistical variability between the intensity distributions of pseudo and T2w MR images, measured as a Kullback-Leibler divergence of 0.069. It produced the highest segmentation accuracy, with a DSC of 0.75 ± 0.12, and the lowest Hausdorff distance, 9.36 ± 6.00 mm, on the test dataset using a U-Net structure. The approach produced estimates of tumor growth highly similar to an expert's (P = 0.37).
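
The evaluation metrics above can be made concrete with a short sketch. This is a generic NumPy/SciPy implementation, not the authors' code; reporting Hausdorff distance in millimeters, as in the paper, would additionally require scaling voxel indices by the scan's voxel spacing.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def hausdorff(a, b):
        """Symmetric Hausdorff distance between the voxel point sets
        of two binary masks (in voxel units here)."""
        pa, pb = np.argwhere(a), np.argwhere(b)
        return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

    def volume_ratio(pred, ref):
        """Ratio of segmented to reference volume (voxel counts)."""
        return pred.sum() / ref.sum()

    pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
    ref = np.zeros((64, 64), bool); ref[22:42, 22:42] = True
    print(dice(pred, ref), hausdorff(pred, ref), volume_ratio(pred, ref))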

Conclusions: A novel deep learning MR segmentation method was developed that overcomes the limitation of learning robust models from small datasets by leveraging learned cross-modality information, using a model that explicitly incorporates knowledge of tumors into the modality translation to augment segmentation training. The results show the feasibility of the approach and its improvement over state-of-the-art methods.

Keywords: cross-modality learning; data augmentation; generative adversarial networks; magnetic resonance imaging; tumor segmentation.


Figures

Figure 1. Pseudo MR image synthesized from a representative (a) CT image using (c) CycleGAN [9], (d) UNIT [8], and (e) the proposed method. The corresponding T2w MRI scan for (a) is shown in (b).
Figure 2. Approach overview. (a) Pseudo MR synthesis; (b) MR segmentation training using pseudo MR together with T2w MR. Visual description of the losses used to train the networks in (a) and (b): (c) GAN (adversarial) loss, (d) cycle consistency loss, (e) tumor-attention loss enforced using structure and shape losses, and (f) segmentation loss computed using the Dice overlap coefficient.
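
A minimal sketch of the Dice-based segmentation loss in (f), written in PyTorch (a framework assumption; it is not stated here). The commented line shows one plausible way the translation losses in (c)-(e) could be weighted and combined; the lambda weights are illustrative, not values from the paper.

    import torch

    def dice_loss(pred, target, eps=1e-6):
        """Soft Dice loss: 1 - Dice overlap between predicted tumor
        probabilities and the binary ground-truth mask."""
        inter = (pred * target).sum()
        return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    # Illustrative combination of the losses in (c)-(e); the weights
    # lam_cyc and lam_tumor are assumptions, not reported values:
    # loss_total = loss_gan + lam_cyc * loss_cycle + lam_tumor * (loss_structure + loss_shape)

    pred = torch.rand(1, 1, 64, 64)
    target = (torch.rand(1, 1, 64, 64) > 0.5).float()
    print(dice_loss(pred, target).item())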
Figure 3. Network structure of the generator, discriminator, and tumor-attention net. The convolutional kernel size and the number of features are indicated as C and N; for instance, C3N512 denotes a convolution with a 3×3 kernel and a feature size of 512.
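
In PyTorch terms (framework assumed for illustration only), the C/N notation maps directly onto a convolution layer. The input channel count below is hypothetical, since it depends on the preceding layer.

    import torch.nn as nn

    # C3N512: a convolution with a 3x3 kernel producing 512 feature maps.
    c3n512 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, padding=1)
    print(c3n512)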
Figure 4. Schematic of the segmentation architectures: (a) Residual-FCN, (b) Dense-FCN, and (c) U-Net. The convolutional kernel size and the number of features are indicated as C and N.
Figure 5. CT to pseudo-MRI transformation using the analyzed methods. (a) Original CT; pseudo MR images produced using (b) CycleGAN [9], (c) masked CycleGAN [22], (d) UNIT [8], and (e) the proposed method. In (f), the abscissa (x-axis) shows the normalized MRI intensity and the ordinate (y-axis) shows the frequency of pixels at that intensity within the tumor region for each method. The T2w MR intensity distribution within the tumor regions of the validation patients is shown for comparison.
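
Panel (f) compares normalized intensity histograms within the tumor region. A generic way to compute such histograms and the Kullback-Leibler divergence reported in the Results is sketched below; the bin count, the [0, 1] normalization range, and the smoothing epsilon are assumptions, not the paper's settings.

    import numpy as np
    from scipy.stats import entropy

    def tumor_kl(pseudo_mr, real_mr, pseudo_mask, real_mask, bins=100):
        """KL divergence between normalized intensity histograms
        restricted to the tumor regions, as in panel (f)."""
        p, _ = np.histogram(pseudo_mr[pseudo_mask], bins=bins, range=(0.0, 1.0), density=True)
        q, _ = np.histogram(real_mr[real_mask], bins=bins, range=(0.0, 1.0), density=True)
        eps = 1e-10  # avoid division by zero in empty bins
        return entropy(p + eps, q + eps)  # scipy's entropy(p, q) is KL(p || q)

    rng = np.random.default_rng(0)
    pseudo, real = rng.random((64, 64)), rng.random((64, 64))
    mask = np.zeros((64, 64), bool); mask[20:40, 20:40] = True
    print(tumor_kl(pseudo, real, mask, mask))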
Figure 6. Segmentations of representative examples from five different patients using the different data augmentation methods. (a) RF+fCRF segmentation [28]; (b) segmentation trained using only the few T2w MRI scans; training combining expert-segmented T2w MRI with pseudo MRI produced using (c) CycleGAN [9], (d) masked CycleGAN [22], (e) UNIT [8], and (g) the proposed method; (f) segmentation trained using only the pseudo MRI produced by the proposed method. The red contour corresponds to the expert delineation and the yellow contour to the algorithm-generated segmentation.
Figure 7. Longitudinal tumor volumes computed for three example patients using the proposed method. (a) Volume growth velocity calculated by the proposed method versus expert delineation; (b) segmentation results from patients 7 and 8. The red contour corresponds to the expert delineation and the yellow contour to the algorithm-generated segmentation.
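
The longitudinal volumes in (a) follow from counting segmented voxels and scaling by voxel size. The sketch below shows that computation plus one plausible definition of growth velocity (volume change per week); how the figure's quantity is actually defined is an assumption here.

    import numpy as np

    def tumor_volume_cc(mask, spacing_mm):
        """Tumor volume in cubic centimeters from a binary 3D mask
        and the scan's voxel spacing in millimeters."""
        voxel_mm3 = float(np.prod(spacing_mm))
        return mask.sum() * voxel_mm3 / 1000.0

    mask = np.zeros((32, 64, 64), bool); mask[10:20, 20:40, 20:40] = True
    print(tumor_volume_cc(mask, (3.0, 1.0, 1.0)))  # 12.0 cc

    # Hypothetical weekly volumes; velocity as cc per week.
    v_week0, v_week1 = 12.5, 14.0
    print(v_week1 - v_week0)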


References

    1. Njeh CF. Tumor delineation: the weakest link in the search for accuracy in radiotherapy. J Med Phys. 2008;33(4):136. - PMC - PubMed
    2. Eisenhauer EA, Therasse P, Bogaerts J, et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer. 2009;45:228-247. - PubMed
    3. Pollard JM, Wen Z, Sadagopan R, Wang J, Ibbott GS. The future of image-guided radiotherapy will be MR guided. Br J Radiol. 2017;90(1073):20160667. - PMC - PubMed
    4. Thompson RF, Valdes G, Fuller CD, Carpenter CM, Morin O, Aneja S, et al. The future of artificial intelligence in radiation oncology. Int J Radiat Oncol Biol Phys. 2018;102(2):247-248. - PubMed
    5. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. In: Advances in Neural Information Processing Systems (NIPS); 2014. p. 2672-2680.
