Cancers. 2023 Oct 2;15(19):4829. doi: 10.3390/cancers15194829.

Deep Learning for Fully Automatic Tumor Segmentation on Serially Acquired Dynamic Contrast-Enhanced MRI Images of Triple-Negative Breast Cancer

Zhan Xu et al. Cancers (Basel).

Abstract

Accurate tumor segmentation is required for quantitative image analyses, which are increasingly used to evaluate tumors. We developed a fully automated, high-performance segmentation model of triple-negative breast cancer using a self-configuring deep learning framework and a large set of dynamic contrast-enhanced MRI images acquired serially over the patients' treatment courses. Among all models, the top-performing one, trained with images from different time points of a treatment course, yielded a Dice similarity coefficient (DSC) of 93% and a sensitivity of 96% on baseline images. The top-performing model also produced accurate tumor size measurements, which is valuable for practical clinical applications.
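
The DSC and sensitivity quoted above are standard overlap metrics between a predicted and a reference binary mask. A minimal plain-Python sketch (not the study's code; the function name and flat 0/1 inputs are illustrative):

```python
def dice_and_sensitivity(pred, ref):
    """Dice similarity coefficient and sensitivity (recall) for two
    flat binary masks given as equal-length sequences of 0/1 voxels."""
    tp = sum(1 for p, r in zip(pred, ref) if p and r)        # true positives
    fp = sum(1 for p, r in zip(pred, ref) if p and not r)    # false positives
    fn = sum(1 for p, r in zip(pred, ref) if r and not p)    # false negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    return dice, sensitivity
```

At the subject level, such values would be computed per patient from the flattened 3D masks and then averaged across the test set.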

Keywords: deep learning; triple-negative breast cancer; tumor segmentation.

Conflict of interest statement

The authors would like to make the following disclosures:

  1. K.K.H. serves on the Medical Advisory Board for ArmadaHealth and AstraZeneca and receives research funding from Cairn Surgical, Eli Lilly & Co., and Lumicell.

  2. K.H. is currently receiving research funding from Siemens Healthineers and has received research funding from GE.

  3. J.K.L. received grant or research support from Novartis, Medivation/Pfizer, Genentech, GSK, EMD Serono, AstraZeneca, MedImmune, Zenith, and Merck; participated in Speakers' Bureaus for MedLearning, Physicians' Education Resource, Prime Oncology, Medscape, Clinical Care Options, and Medpage; and receives royalties from UpToDate.

  4. The spouse of A.T. works for Eli Lilly.

  5. D.T. declares research contracts with Pfizer, Novartis, and Polyphor and is a consultant for AstraZeneca, GlaxoSmithKline, OncoPep, Gilead, Novartis, Pfizer, Personalis, and Sermonix.

  6. W.Y. receives royalties from Elsevier.

  7. J.M. is a consultant of C4 Imaging, L.L.C., and an inventor of United States patents licensed to Siemens Healthineers and GE Healthcare.

  8. For the remaining authors, none were declared.

The funders had no role in the design of this study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figures

Figure 1
Segmentation performance of nnU-Net models with different combinations of BL DCE images and semiquantitative parametric maps. The DSC and sensitivity were measured at the subject level using manually labeled masks as the reference standard and were then averaged across the BL test set. (A) Boxplots of each set of results: the first and third quartiles (lower and upper ends of the box), whiskers extending to 1.5 interquartile ranges beyond the first and third quartiles, the median (horizontal line in the box), the mean (×), and outliers (discrete data points). Letters above the boxplots indicate statistical significance between that metric and the reference metric, which is labeled with the same letter and an asterisk. (B) The detailed quantitative results used for the boxplots in (A).
Figure 2
Automated tumor segmentation with and without inclusion of central necrosis and biopsy clips. (A) nnU-Net_Excl and nnU-Net_Incl had similar DSCs on the same test cases (p = 0.27). (B) Tumor sizes based on reference masks (ref: green) were similar to those estimated with nnU-Net_Excl (p = 0.14) and nnU-Net_Incl (p = 0.58). (C) Automated masks excluding (Excl: blue) and including (Incl: red) central necrosis and biopsy clips, overlaid on the corresponding reference (green) mask of a representative subject. The region within the dashed box is magnified and displayed as the background in the Excl and Incl images.
Figure 3
Segmentation performance of nnU-Net models using data from various time points. DSC (A) and sensitivity (B) of the different models applied to the corresponding test datasets. Blue bars above the datasets indicate significant differences in the paired Wilcoxon signed-rank test (p < 0.05, black asterisks). (C) The detailed quantitative results used for the boxplots in (A,B). (D) Two representative subjects and the predicted segmentation performance of the nnU-Net_3tpt model. The reference mask is the union of the blue and red masks.
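
The paired Wilcoxon signed-rank test used in these comparisons operates on per-subject score pairs (e.g., the DSCs of two models on the same cases). A minimal sketch of its test statistic, assuming zero differences are dropped and tied absolute differences receive average ranks; obtaining the p-value additionally requires the statistic's null distribution (in practice via scipy.stats.wilcoxon):

```python
def wilcoxon_signed_rank_stat(a, b):
    """Paired Wilcoxon signed-rank statistic W: the smaller of the
    positive-rank and negative-rank sums of |a_i - b_i|."""
    diffs = [ai - bi for ai, bi in zip(a, b) if ai != bi]  # drop zero diffs
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i  # find the run of tied absolute differences
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_pos, w_neg)
```

A small W relative to its null distribution indicates a systematic difference between the paired scores.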
Figure 4
Segmentation performance of nnU-Net models by tumor size. (A) DSC of nnU-Net_3tpt across tumor sizes in the 3tpt test set. Red bars indicate significant differences in the unpaired Wilcoxon rank-sum test adjusted at p < 0.016 (red asterisks). (B) DSC of nnU-Net_3tpt and nnU-Net_BL applied to the BL test set across tumor sizes. The blue bar indicates a statistically significant difference in the paired Wilcoxon signed-rank test (p < 0.05, blue asterisks). (C) The detailed quantitative results used for the boxplots in (A,B).
Figure 5
Comparison of tumor size between the predicted segmentation using nnU-Net_3tpt and the reference tumor mask. Shown are linear relationships between the tumor sizes of the predicted segmentation and the reference at BL (A), C2 (B), and C4 (C). The best-fit linear regression is represented by the solid line, and the 95% confidence interval bands are denoted by dashed lines.
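
The best-fit line in each panel is an ordinary least-squares regression of predicted tumor size on reference tumor size. A minimal closed-form sketch (illustrative only; the 95% confidence bands in the figure would additionally require the residual standard error):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ≈ slope * x + intercept
    for equal-length sequences of paired measurements."""
    n = len(x)
    mx = sum(x) / n                       # mean of x
    my = sum(y) / n                       # mean of y
    sxx = sum((xi - mx) ** 2 for xi in x) # variance term
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # covariance term
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept
```

A slope near 1 with an intercept near 0 would indicate that predicted tumor sizes closely track the reference measurements.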
