Radiat Oncol. 2020 Apr 20;15(1):87. doi: 10.1186/s13014-020-01514-6.

Deep convolutional neural networks for automated segmentation of brain metastases trained on clinical data


Khaled Bousabarah et al. Radiat Oncol. 2020.

Abstract

Introduction: Deep learning-based algorithms have demonstrated excellent performance in the segmentation of medical images. We collected a dataset of multiparametric MRI and contour data acquired for radiosurgery planning to evaluate the performance of deep convolutional neural networks (DCNN) in the automatic segmentation of brain metastases (BM).

Methods: A conventional U-Net (cU-Net), a modified U-Net (moU-Net) and a U-Net trained only on BM smaller than 0.4 ml (sU-Net) were implemented. Performance was assessed on a separate test set using sensitivity, specificity, the average false positive rate (AFPR), the Dice similarity coefficient (DSC), Bland-Altman analysis and the concordance correlation coefficient (CCC).
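For readers implementing these metrics, a minimal NumPy sketch of the Dice similarity coefficient under its usual definition is given below. This is an illustrative implementation, not the authors' code; pred and truth are assumed to be binary voxel masks of equal shape.

import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    # DSC = 2 * |P intersect T| / (|P| + |T|) on binary voxel masks
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks are in perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom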

Results: A dataset of 509 patients (1223 BM) was split into a training set (469 patients) and a test set (40 patients). A combination of all trained networks was the most sensitive (0.82) while maintaining a specificity of 0.83. The same model achieved a sensitivity of 0.97 and a specificity of 0.94 when considering only lesions larger than 0.06 ml (75% of all lesions). The type of primary cancer had no significant influence on the mean DSC per lesion (p = 0.60). Agreement between manually and automatically assessed tumor volumes, as quantified by a CCC of 0.87 (95% CI 0.77-0.93), was excellent.

Conclusion: Using a dataset that properly captured the variation in imaging appearance observed in clinical practice, we were able to conclude that DCNNs reach clinically relevant performance for most lesions. Clinical applicability is currently limited by the size of the target lesion. Further studies should address whether small targets are accurately represented in the test data.

Keywords: Brain metastasis; Deep learning; Magnetic resonance imaging; Segmentation; Stereotactic radiosurgery.


Conflict of interest statement

The authors of this manuscript declare relationships with the following companies:

Maximilian Ruge received research support for an unrelated project and received remuneration for activities as a speaker from Accuray.

Jan Borggrefe is a scientific speaker for Philips Healthcare.

Nils Große-Hokamp is a scientific speaker for Philips Healthcare and received research support for an unrelated project.

Figures

Fig. 1
Architecture of the trained U-Net. All convolutions used filters with a kernel size of 3. Before each convolution, instance normalization and the activation function (leaky ReLU) were applied to the input. The residual block contained two such convolutions. Downsampling in the encoding layers was realized using a convolution with a stride of 2. In the output layers, the sigmoid function is applied to the DCNN’s output. For the moU-Net, two intermediate output layers are added (dashed red lines). The original contour data are then used to compute the cost function
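As an illustration of the block described above, a minimal PyTorch sketch of a pre-activation residual block (instance normalization and leaky ReLU applied before each kernel-size-3 convolution) and of a stride-2 downsampling convolution follows; class names, 3D convolutions and channel handling are assumptions for this sketch, not the authors' implementation.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two pre-activated 3x3x3 convolutions with an identity skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.InstanceNorm3d(channels), nn.LeakyReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels), nn.LeakyReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)

class Downsample(nn.Module):
    """Encoder downsampling realized as a convolution with a stride of 2."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, out_channels, kernel_size=3,
                              stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)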
Fig. 2
Sensitivity and specificity of the developed networks plotted against the minimum volume of the considered target lesions. Dashed lines depict the four quartiles (Qi) of the measured volumes of target lesions in the test data (Q1 = 0.06 ml, Q2 = 0.29 ml, Q3 = 1.29 ml, Q4 = 8.05 ml). The largest drop in both sensitivity and specificity is observed for lesions smaller than 0.06 ml. At this threshold the sensitivities and specificities are 0.97/0.92 and 0.92/1.00 for NetSUM and NetMV, respectively
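A hedged sketch of how such a volume-thresholded, lesion-wise sensitivity can be computed is shown below (not the authors' code): ground-truth lesions are taken as connected components, their volumes derived from an assumed per-voxel volume, and a lesion counts as detected if it overlaps the predicted mask, following the detection criterion stated for Fig. 5.

import numpy as np
from scipy import ndimage

def lesionwise_sensitivity(truth: np.ndarray, pred: np.ndarray,
                           voxel_volume_ml: float, min_volume_ml: float) -> float:
    """Fraction of ground-truth lesions >= min_volume_ml that overlap the prediction."""
    labels, n_lesions = ndimage.label(truth)       # connected components = lesions
    detected = considered = 0
    for lesion_id in range(1, n_lesions + 1):
        lesion = labels == lesion_id
        if lesion.sum() * voxel_volume_ml < min_volume_ml:
            continue                               # below the plotted volume threshold
        considered += 1
        if np.logical_and(lesion, pred).any():     # any overlap counts as detected
            detected += 1
    return detected / considered if considered else float("nan")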
Fig. 3
a: Bland-Altman plot visualizing agreement between the manually delineated ground truth and automatic segmentation by DCNN per patient. The middle horizontal line is drawn at the mean difference (0.15 ml) between both measurements, and the lines below and above at the limits of agreement (95% CI). b: Volume predicted by DCNN plotted against manual segmentation. The concordance correlation coefficient (CCC), measuring deviation from the diagonal line depicting perfect agreement between both volumes, was 0.87
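For reference, a short NumPy sketch of the two per-patient agreement statistics shown in this figure (assumed helper names, not the authors' implementation); manual and auto are arrays of per-patient tumor volumes in ml.

import numpy as np

def bland_altman(manual: np.ndarray, auto: np.ndarray):
    """Mean difference and 95% limits of agreement (mean +/- 1.96 SD of the differences)."""
    diff = auto - manual
    mean_diff = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return mean_diff, mean_diff - spread, mean_diff + spread

def concordance_correlation(manual: np.ndarray, auto: np.ndarray) -> float:
    """Lin's CCC, measuring deviation from the line of perfect agreement."""
    mx, my = manual.mean(), auto.mean()
    cov = np.mean((manual - mx) * (auto - my))     # population covariance
    return 2.0 * cov / (manual.var() + auto.var() + (mx - my) ** 2)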
Fig. 4
Samples of T1c images (if not otherwise specified) containing ground truth segmentations (blue lines) and segmentations by DCNN (purple lines). a: Randomly selected samples of detected lesions. The number in the bottom left of each image is the percentage of segmentations with a similar quality of segmentation as measured by DSC. b: Samples of undetected lesions. Atypical BM: largest undetected lesion, with minor contrast uptake in the rim. Wrong T1c: second-largest undetected lesion, where the T1c images came from a different study than the T2 and FLAIR images. Small BM: randomly selected samples of undetected lesions
Fig. 5
Per-lesion results for all algorithms (cU-Net, moU-Net, sU-Net and their combination) and for ensembles built through summation and majority voting. A lesion in the test set (40 patients, 83 lesions) was considered detected if it overlapped with a segmentation produced by the respective algorithm. The degree of overlap, and thus the quality of the segmentation, was assessed using the Dice similarity coefficient (DSC). The dashed blue line marks the threshold below which a lesion was defined as small (< 0.4 ml) and thus used to train the sU-Net
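A minimal sketch of the two ensembling schemes named here (assumed details, not the authors' code): summation averages the networks' probability maps before thresholding, while majority voting combines their binary masks voxel-wise. The 0.5 threshold is an assumption for illustration.

import numpy as np

def ensemble_sum(prob_maps: list, threshold: float = 0.5) -> np.ndarray:
    """Summation ensemble: average the probability maps of all networks, then threshold."""
    return np.mean(prob_maps, axis=0) > threshold

def ensemble_majority_vote(binary_masks: list) -> np.ndarray:
    """Majority voting: a voxel is foreground if more than half of the networks segment it."""
    votes = np.sum(binary_masks, axis=0)
    return votes > len(binary_masks) / 2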
