Machine learning techniques for mitoses classification
- PMID: 33302246
- PMCID: PMC7855641
- DOI: 10.1016/j.compmedimag.2020.101832
Erratum in
- Corrigendum to "Machine learning techniques for mitoses classification" [Comput. Med. Imaging Graph. 87 January (2021) 101832]. Comput Med Imaging Graph. 2021 Jun;90:101903. doi: 10.1016/j.compmedimag.2021.101903. Epub 2021 Apr 10. PMID: 33845431. No abstract available.
Abstract
Background: Pathologists analyze biopsy material at both the cellular and structural level to determine diagnosis and cancer stage. Mitotic figures are surrogate biomarkers of cellular proliferation that can provide prognostic information; thus, their precise detection is an important factor for clinical care. Convolutional Neural Networks (CNNs) have shown remarkable performance on several recognition tasks. Utilizing CNNs for mitosis classification may help pathologists improve detection accuracy.
Methods: We studied two state-of-the-art CNN-based models, ESPNet and DenseNet, for mitosis classification on six whole slide images of skin biopsies and compared their quantitative performance in terms of sensitivity, specificity, and F-score. We used raw RGB images of mitosis and non-mitosis samples with their corresponding labels as training input. To compare with prior work, we also evaluated these classifiers and two other architectures, ResNet and ShuffleNet, on the publicly available MITOS breast biopsy dataset and compared all four in terms of precision, recall, and F-score (the standard metrics for this dataset), architecture, training time, and inference time.
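The evaluation metrics named above all derive from the binary confusion matrix. A minimal Python sketch (the helper name and toy counts are illustrative, not from the paper) shows how they relate for a classifier where "positive" means mitotic figure:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall: fraction of true mitoses found
    specificity = tn / (tn + fp)   # fraction of non-mitoses correctly rejected
    precision = tp / (tp + fp)     # fraction of flagged mitoses that are real
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f_score": f_score}

# Toy example: 90 true mitoses found, 10 missed, 5 false alarms, 895 true negatives.
print(binary_metrics(tp=90, fp=5, tn=895, fn=10))
```

Note that sensitivity and recall are the same quantity; the abstract reports sensitivity/specificity for the melanoma dataset and precision/recall/F-score for MITOS because the latter are standard for that benchmark.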
Results: On our primary melanoma dataset, ESPNet and DenseNet achieved a sensitivity of 0.976 and 0.968, a specificity of 0.987 and 0.995, and an F-score of 0.968 and 0.976, respectively. On the MITOS dataset, ESPNet and DenseNet showed a sensitivity of 0.866 and 0.916 and a specificity of 0.973 and 0.980, respectively. DenseNet on MITOS had a precision of 0.939, a recall of 0.916, and an F-score of 0.927; the best previously published result on MITOS (Saha et al., 2018) reported a precision of 0.92, a recall of 0.88, and an F-score of 0.90. In our architecture comparison on MITOS, DenseNet beat the others in F-score (DenseNet 0.927, ESPNet 0.890, ResNet 0.865, ShuffleNet 0.847) and especially recall (DenseNet 0.916, ESPNet 0.866, ResNet 0.807, ShuffleNet 0.753), while ResNet and ESPNet had much faster inference times (ResNet 6 s, ESPNet 8 s, DenseNet 31 s). ResNet is faster than ESPNet, but ESPNet's higher F-score and recall make it a good compromise solution.
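As a quick sanity check on the reported numbers, the F-score is the harmonic mean of precision and recall, so the DenseNet figure on MITOS can be reproduced directly from its precision and recall (values taken from the results above):

```python
def f_score(precision: float, recall: float) -> float:
    """F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# DenseNet on the MITOS dataset (values from the Results section).
print(round(f_score(precision=0.939, recall=0.916), 3))  # → 0.927
```

The same arithmetic applied to the Saha et al. figures (precision 0.92, recall 0.88) gives their reported F-score of 0.90.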
Conclusion: We studied several state-of-the-art CNNs for detecting mitotic figures in whole slide biopsy images. We evaluated two CNNs on a melanoma cancer dataset and then compared four CNNs on a public breast cancer dataset, using the same methodology on both. Our methodology and architecture for mitosis finding in both melanoma and breast cancer whole slide images have been thoroughly tested and are likely to be useful for finding mitoses in any whole slide biopsy images.
Keywords: Convolutional neural networks; Machine learning; Melanoma; Mitoses; Pathology.
Copyright © 2020 Elsevier Ltd. All rights reserved.
Conflict of interest statement
Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
- Saha M, Chakraborty C, Racoceanu D. Efficient deep learning model for mitosis detection using breast histopathology images. Comput Med Imaging Graph. 2018;64:29–40. - PubMed
- American Cancer Society. Cancer facts & figures. American Cancer Society; 2016.
- Cireşan DC, et al. Mitosis detection in breast cancer histology images with deep neural networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2013. - PubMed