Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images

Nuo Tong et al. Med Phys. 2019 Jun;46(6):2669-2682. doi: 10.1002/mp.13553. Epub 2019 May 6.

Abstract

Purpose: Image-guided radiotherapy provides images not only for patient positioning but also for online adaptive radiotherapy. Accurate delineation of organs-at-risk (OARs) on head and neck (H&N) CT and MR images is valuable to both initial treatment planning and adaptive planning, but manual contouring is laborious and inconsistent. A novel method based on a generative adversarial network with shape constraint (SC-GAN) is developed for fully automated H&N OAR segmentation on CT and low-field MRI.

Methods and material: A deeply supervised fully convolutional DenseNet is employed as the segmentation network for voxel-wise prediction. A convolutional neural network (CNN)-based discriminator network is then utilized to correct prediction errors and image-level inconsistency between the prediction and the ground truth. An additional shape representation loss between the prediction and the ground truth in the latent shape space is integrated into the segmentation and adversarial loss functions to reduce false positives and constrain the predicted shapes. The proposed segmentation method was first benchmarked on a public H&N CT database of 32 patients, and then on 25 MR image sets acquired at 0.35 T on an MR-guided radiotherapy system. The OARs include the brainstem, optical chiasm, larynx (MR only), mandible, pharynx (MR only), parotid glands (left and right), optical nerves (left and right), and submandibular glands (left and right, CT only). The performance of the proposed SC-GAN was compared with GAN alone and with GAN plus the shape constraint (SC) but without the DenseNet (SC-GAN-ResNet) to quantify the contributions of the shape constraint and the DenseNet to deep neural network segmentation.
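As a rough illustration of how these three terms could interact, the PyTorch-style sketch below combines a voxel-wise segmentation loss, an adversarial loss from the discriminator, and a latent-space shape representation loss. The specific loss functions, the weights lambda_adv and lambda_shape, and the shape_encoder interface are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sc_gan_generator_loss(pred, target, disc_score_on_pred, shape_encoder,
                          lambda_adv=0.01, lambda_shape=0.1):
    """Illustrative combined generator loss (weights are hypothetical).

    pred: logits (B, C, D, H, W); target: integer labels (B, D, H, W);
    disc_score_on_pred: discriminator logit(s) for the prediction;
    shape_encoder: encoder of a pretrained shape representation model.
    """
    # Voxel-wise segmentation term (cross-entropy here; the paper may
    # use a different or composite segmentation loss).
    seg_loss = F.cross_entropy(pred, target)

    # Adversarial term: the segmentation network tries to make the
    # discriminator label its prediction as ground truth.
    adv_loss = F.binary_cross_entropy_with_logits(
        disc_score_on_pred, torch.ones_like(disc_score_on_pred))

    # Shape representation term: distance between prediction and ground
    # truth in the latent space of the (frozen) shape encoder.
    with torch.no_grad():
        one_hot = F.one_hot(target, pred.shape[1]).permute(0, 4, 1, 2, 3).float()
        z_true = shape_encoder(one_hot)
    z_pred = shape_encoder(torch.softmax(pred, dim=1))
    shape_loss = F.mse_loss(z_pred, z_true)

    return seg_loss + lambda_adv * adv_loss + lambda_shape * shape_loss
```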

Results: The proposed SC-GAN slightly but consistently improved segmentation accuracy on the benchmark H&N CT images compared with our previous deep segmentation network, which had outperformed other published methods on the same or similar H&N CT datasets. On the low-field MR dataset, the following average Dice indices were obtained using the improved SC-GAN: 0.916 (brainstem), 0.589 (optical chiasm), 0.816 (mandible), 0.703 (optical nerves), 0.799 (larynx), 0.706 (pharynx), and 0.845 (parotid glands). The average surface distances ranged from 0.68 mm (brainstem) to 1.70 mm (larynx). The 95% surface distance ranged from 1.48 mm (left optical nerve) to 3.92 mm (larynx). By the 95% surface distance measure, automated segmentation accuracy was higher on MR than on CT for the brainstem, optical chiasm, optical nerves, and parotids, and lower for the mandible. SC-GAN performed better than SC-GAN-ResNet, which in turn was more accurate than GAN alone on both the CT and MR datasets. The segmentation time for one patient was 14 s on a single GPU.
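For reference, the Dice index quoted above is the standard overlap measure between a predicted and a ground-truth binary mask. The minimal NumPy sketch below shows how such values are computed; it is not the authors' evaluation code.

```python
import numpy as np

def dice_index(pred_mask, gt_mask):
    """Dice similarity coefficient between two binary masks (1 = perfect)."""
    pred_mask = pred_mask.astype(bool)
    gt_mask = gt_mask.astype(bool)
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    denom = pred_mask.sum() + gt_mask.sum()
    return 2.0 * intersection / denom if denom else 1.0

# A reported Dice of 0.916 (brainstem) means the overlap volume is about
# 91.6% of the average of the predicted and ground-truth volumes.
```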

Conclusion: The performance of our previous shape constrained fully convolutional neural networks (CNNs) for H&N segmentation is further improved by incorporating GAN and DenseNet. With the novel segmentation method, we showed that low-field MR images acquired on an MR-guided radiotherapy system can support accurate and fully automated segmentation of both bony and soft tissue OARs for adaptive radiotherapy.

Keywords: fully convolutional DenseNet; generative adversarial network; head and neck images; shape representation loss.


Figures

Figure 1
The overall structure of the proposed SC-GAN network.
Figure 2
The structure of the dense block and localization block. (a) Dense block. (b) Localization block.
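As a rough sketch of the dense-block pattern shown in Figure 2, the PyTorch module below concatenates each layer's output with all preceding feature maps. The layer count, growth rate, and 3D kernel sizes are placeholder assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """Illustrative 3D dense block: each layer receives the concatenation
    of all earlier feature maps and adds growth_rate new channels."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm3d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv3d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1)))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```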
Figure 3
The architecture of the segmentation network. N represents the number of organs to be segmented. For concise illustration, batch normalization and rectified linear unit layers are omitted from this figure.
Figure 4
The architecture of the shape representation model.
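One common realization of a shape representation model like the one in Figure 4 is an autoencoder trained on ground-truth label maps, whose encoder maps a segmentation to a compact latent shape code. The sketch below follows that pattern; the channel sizes and depth are assumptions, not the paper's architecture.

```python
import torch.nn as nn

class ShapeAutoencoder3D(nn.Module):
    """Illustrative shape representation model: encoder produces the latent
    shape code used by the shape representation loss; decoder reconstructs
    the label map during pretraining."""
    def __init__(self, num_classes, latent_channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(num_classes, 32, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, latent_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent_channels, 32, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, num_classes, 4, stride=2, padding=1))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z
```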
Figure 5
The architecture of the discriminative network.
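For intuition about the discriminative network in Figure 5, the sketch below is a generic CNN discriminator that scores an input volume (e.g., an image and segmentation pair) as ground truth or prediction. The channel counts and depth are placeholders, not the paper's exact configuration.

```python
import torch.nn as nn

class Discriminator3D(nn.Module):
    """Illustrative CNN discriminator: strided 3D convolutions followed by
    global pooling to one real/fake logit per volume."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(64, 1, 4, stride=2, padding=1),
            nn.AdaptiveAvgPool3d(1))  # global average -> single logit

    def forward(self, x):
        return self.net(x).flatten(1)
```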
Figure 6
Learning curves of the discriminator, the generator (i.e., segmentation network), and the shape representation model.
Figure 7
Examples of H&N CT segmentation results from the generative adversarial network (GAN), shape constraint (SC)-DenseNet, SC-GAN-ResNet, and SC-GAN-DenseNet. The first column shows the ground truth; the second through fifth columns show the segmentation results from GAN, SC-DenseNet, SC-GAN-ResNet, and SC-GAN-DenseNet, respectively. Brainstem (purple), optical chiasm (dark green), mandible (green), left and right optical nerves (orange and light orange), left and right parotid glands (blue and yellow), left and right submandibular glands (pink and light green). The single, double, and blunt arrows denote false-positive islands, undersegmentations, and mis-segmentations, respectively.
Figure 8
Examples of H&N MRI segmentation results from the generative adversarial network (GAN), shape constraint (SC)-DenseNet, SC-GAN-ResNet, and SC-GAN-DenseNet. The first column shows the ground truth; the second through fifth columns show the segmentation results from GAN, SC-DenseNet, SC-GAN-ResNet, and SC-GAN-DenseNet, respectively. Brainstem (purple), optical chiasm (blue), larynx (orange), mandible (grass green), left and right optical nerves (yellow and light blue), left and right parotid glands (pink and green), pharynx (blue). The single, double, and blunt arrows denote false-positive islands, undersegmentations, and mis-segmentations, respectively.
