An investigation of the effect of fat suppression and dimensionality on the accuracy of breast MRI segmentation using U-nets
- PMID: 30609062
- DOI: 10.1002/mp.13375
Abstract
Purpose: Accurate segmentation of the breast is required for breast density estimation and the assessment of background parenchymal enhancement, both of which have been shown to be related to breast cancer risk. The MRI breast segmentation task is challenging, and recent work has demonstrated that convolutional neural networks perform well for this task. In this study, we have investigated the performance of several two-dimensional (2D) U-Net and three-dimensional (3D) U-Net configurations using both fat-suppressed and nonfat-suppressed images. We have also assessed the effect of changing the number and quality of the ground truth segmentations.
Materials and methods: We designed eight studies to investigate the effect of input types and the dimensionality of the U-Net operations for breast MRI segmentation. Our training data contained 70 whole breast volumes of T1-weighted sequences without fat suppression (WOFS) and with fat suppression (FS). For each subject, we registered the WOFS and FS volumes together before manually segmenting the breast to generate ground truth. We compared four different input types to the U-Nets: WOFS, FS, MIXED (WOFS and FS images treated as separate samples), and MULTI (WOFS and FS images combined into a single multichannel image). We trained 2D U-Nets and 3D U-Nets with these data, which resulted in our eight studies (2D-WOFS, 3D-WOFS, 2D-FS, 3D-FS, 2D-MIXED, 3D-MIXED, 2D-MULTI, and 3D-MULTI). For each of these studies, we performed a systematic grid search to tune the hyperparameters of the U-Nets. A separate validation set with 15 whole breast volumes was used for hyperparameter tuning. We performed a Kruskal-Wallis test on the results of our hyperparameter tuning and did not find a statistically significant difference among the top ten models of each study. For this reason, we chose the best model as the model with the highest mean dice similarity coefficient (DSC) value on the validation set. The reported test results are the results of the top model of each study on our test set, which contained 19 whole breast volumes annotated by three readers and fused with the STAPLE algorithm. We also investigated the effect of the quality of the training annotations and the number of training samples for this task.
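The DSC used for model selection above is defined as twice the overlap of the two masks divided by their total size. A minimal pure-Python sketch (toy 1D masks, not the study's data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 voxel labels."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: a predicted vs. a ground-truth segmentation
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) = 0.666...
```

In practice the masks would be flattened 3D volumes; the formula is unchanged.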
Results: The study with the highest average DSC result was 3D-MULTI with 0.96 ± 0.02. The second highest average was 2D-WOFS (0.96 ± 0.03), and the third was 2D-MULTI (0.96 ± 0.03). We performed the Kruskal-Wallis one-way ANOVA test with Dunn's multiple comparison tests using Bonferroni P-value correction on the results of the selected model of each study and found that 3D-MULTI, 2D-MULTI, 3D-WOFS, 2D-WOFS, 2D-FS, and 3D-FS were not statistically different in their distributions. This indicates that comparable results can be obtained in fat-suppressed and nonfat-suppressed volumes and that there is no significant difference between the 3D and 2D approaches. Our results also suggested that networks trained on single-sequence images, or on multiple sequences organized as multichannel images, perform better than models trained on a mixture of volumes from different sequences. Our investigation of the training set size revealed that training a U-Net in this domain requires only a modest amount of data: results obtained with 49 and 70 training datasets were not significantly different.
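The Kruskal-Wallis test used above compares groups by ranking all observations together. A minimal pure-Python sketch of the H statistic (one-way ANOVA on ranks), run on made-up per-volume DSC scores that are illustrative only, not the paper's data:

```python
from collections import Counter
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic with the standard tie correction.
    Pure-Python sketch; in practice use scipy.stats.kruskal."""
    data = list(chain.from_iterable(groups))
    n = len(data)
    # Assign 1-based ranks; tied values get their mean rank.
    order = sorted(range(n), key=lambda i: data[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and data[order[j + 1]] == data[order[i]]:
            j += 1
        mean_rank = (i + j + 2) / 2.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    # H = 12/(N(N+1)) * sum(R_g^2 / n_g) - 3(N+1)
    h, start = 0.0, 0
    for g in groups:
        r = sum(ranks[start:start + len(g)])
        h += r * r / len(g)
        start += len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    # Tie correction: divide by 1 - sum(t^3 - t) / (N^3 - N).
    ties = sum(t ** 3 - t for t in Counter(data).values())
    return h / (1 - ties / (n ** 3 - n)) if ties else h

# Hypothetical per-volume DSC scores for three of the studies.
dsc_a = [0.97, 0.95, 0.96, 0.98, 0.94]
dsc_b = [0.96, 0.93, 0.97, 0.95, 0.96]
dsc_c = [0.95, 0.96, 0.94, 0.97, 0.95]
print(kruskal_wallis_h(dsc_a, dsc_b, dsc_c))  # small H -> similar distributions
```

A small H (large p-value) means the score distributions are indistinguishable, matching the paper's finding that the 2D/3D and FS/WOFS variants perform comparably. The Dunn post-hoc comparisons with Bonferroni correction used in the study are a separate step, not sketched here.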
Conclusions: To summarize, we investigated the use of 2D U-Nets and 3D U-Nets for breast volume segmentation in fat-suppressed and nonfat-suppressed T1-weighted volumes. Although our highest score was obtained in the 3D-MULTI study, in which we took advantage of the information in both fat-suppressed and nonfat-suppressed volumes as well as their 3D structure, all of the methods we explored gave accurate segmentations, with average DSC values above 0.94, demonstrating that the U-Net is a robust segmentation method for breast MRI volumes.
Keywords: breast MRI; breast segmentation; convolutional neural networks.
© 2019 American Association of Physicists in Medicine.
Similar articles
- Using deep learning to segment breast and fibroglandular tissue in MRI volumes. Med Phys. 2017 Feb;44(2):533-546. doi: 10.1002/mp.12079. PMID: 28035663
- Automated segmentation of the human supraclavicular fat depot via deep neural network in water-fat separated magnetic resonance images. Quant Imaging Med Surg. 2023 Jul 1;13(7):4699-4715. doi: 10.21037/qims-22-304. Epub 2023 Mar 14. PMID: 37456284. Free PMC article.
- Visual ensemble selection of deep convolutional neural networks for 3D segmentation of breast tumors on dynamic contrast enhanced MRI. Eur Radiol. 2023 Feb;33(2):959-969. doi: 10.1007/s00330-022-09113-7. Epub 2022 Sep 8. PMID: 36074262. Free PMC article.
- Automatic Segmentation of Multiple Organs on 3D CT Images by Using Deep Learning Approaches. Adv Exp Med Biol. 2020;1213:135-147. doi: 10.1007/978-3-030-33128-3_9. PMID: 32030668. Review.
- Automated vessel segmentation in lung CT and CTA images via deep neural networks. J Xray Sci Technol. 2021;29(6):1123-1137. doi: 10.3233/XST-210955. PMID: 34421004. Review.
Cited by
- Machine learning on MRI radiomic features: identification of molecular subtype alteration in breast cancer after neoadjuvant therapy. Eur Radiol. 2023 Apr;33(4):2965-2974. doi: 10.1007/s00330-022-09264-7. Epub 2022 Nov 23. PMID: 36418622
- Development of U-Net Breast Density Segmentation Method for Fat-Sat MR Images Using Transfer Learning Based on Non-Fat-Sat Model. J Digit Imaging. 2021 Aug;34(4):877-887. doi: 10.1007/s10278-021-00472-z. Epub 2021 Jul 9. PMID: 34244879. Free PMC article.
- An automated computational biomechanics workflow for improving breast cancer diagnosis and treatment. Interface Focus. 2019 Aug 6;9(4):20190034. doi: 10.1098/rsfs.2019.0034. Epub 2019 Jun 14. PMID: 31263540. Free PMC article.
- Current Status and Future Perspectives of Artificial Intelligence in Magnetic Resonance Breast Imaging. Contrast Media Mol Imaging. 2020 Aug 28;2020:6805710. doi: 10.1155/2020/6805710. eCollection 2020. PMID: 32934610. Free PMC article. Review.
- Machine learning in breast MRI. J Magn Reson Imaging. 2020 Oct;52(4):998-1018. doi: 10.1002/jmri.26852. Epub 2019 Jul 5. PMID: 31276247. Free PMC article. Review.