[Preprint] arXiv:2405.09851v1. Posted 2024 May 16.

Region of Interest Detection in Melanocytic Skin Tumor Whole Slide Images - Nevus & Melanoma

Yi Cui et al., arXiv.

Abstract

Automated region-of-interest detection in histopathological image analysis is a challenging and important problem with tremendous potential impact on clinical practice. Deep-learning methods in computational pathology may reduce costs and increase the speed and accuracy of cancer diagnosis. We started from the UNC Melanocytic Tumor Dataset, a cohort of 160 hematoxylin-and-eosin whole-slide images comprising 86 primary melanomas and 74 nevi. We randomly assigned 80% (134 slides) to a training set and built an in-house deep-learning method for slide-level classification of nevi and melanomas. The proposed method performed well on the remaining 20% (26 slides) held out for testing: slide-classification accuracy was 92.3%, and the model also predicted the regions of interest annotated by pathologists, demonstrating strong performance on melanocytic skin tumors. Although our experiments used a skin tumor dataset, the approach could be extended to other medical image detection problems to benefit the clinical evaluation and diagnosis of different tumors.
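The random slide-level train/test split described above can be sketched as follows; the helper name, seed, and slide IDs are illustrative placeholders, not taken from the paper.

```python
import random

# Minimal sketch of a random train/test split at the slide level.
# Splitting by whole slide (not by patch) avoids leaking patches from
# one slide into both sets.
def split_slides(slide_ids, train_frac=0.8, seed=0):
    ids = list(slide_ids)
    random.Random(seed).shuffle(ids)        # deterministic shuffle
    n_train = round(train_frac * len(ids))  # size of the training set
    return ids[:n_train], ids[n_train:]

train, test = split_slides(range(160))      # 160 WSIs in the cohort
```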

Keywords: Deep Learning; Melanocytic Skin Tumor; Melanoma; Nevus; Region of Interest Detection.


Figures

Fig. 1.
The ROI annotated by pathologists with black dots; the predicted ROI is bounded by the green line on the right.
Fig. 2.
Overview of the proposed detection framework. (a) The Melanocytic Tumor Dataset: 80% of the data (134 WSIs) randomly assigned as the training set and 20% (26 WSIs) as the test set. (b) Preprocessing: color normalization [21] and data augmentation. (c) Patch extraction: melanoma, nevus, and other patches extracted from the training data. (d) Model training: a 3-class patch classifier trained on the extracted patches. (e) Slide classification: for each slide, predicted scores generated for all patches, and patch- and slide-level classification accuracy computed. (f) Patch ranking: all patches from a slide ranked by their predicted scores for melanoma or nevus, depending on the slide classification result. (g) Visualization: visualization results generated from the predicted scores.
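Steps (e) and (f) of the framework can be sketched as follows. This is a hedged illustration, not the paper's implementation: the class ordering, mean-score aggregation rule, and function names are assumptions.

```python
import numpy as np

# Hypothetical sketch of slide classification (e) and patch ranking (f).
# Assumed class order of the 3-class patch classifier (not from the paper):
CLASSES = ("melanoma", "nevus", "other")

def classify_slide(patch_scores: np.ndarray) -> str:
    """patch_scores: (n_patches, 3) softmax outputs of the patch classifier."""
    # Average class scores over all patches in the slide (assumed rule).
    mean_scores = patch_scores.mean(axis=0)
    # Decide between melanoma and nevus using the two tumor classes only.
    slide_idx = int(np.argmax(mean_scores[:2]))
    return CLASSES[slide_idx]

def rank_patches(patch_scores: np.ndarray, slide_label: str) -> np.ndarray:
    """Rank patches by their score for the predicted slide class."""
    col = CLASSES.index(slide_label)
    # Highest-scoring patches first: candidate regions of interest.
    return np.argsort(-patch_scores[:, col])
```

The top-ranked patches then drive the visualization in step (g), highlighting the candidate ROI on the slide.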
Fig. 3.
Visualization results for a melanoma sample and a nevus sample.
Fig. 4.
Visualization results for misclassified case 1.
Fig. 5.
Visualization results for misclassified case 2.

References

    1. Ankerst M., Breunig M.M., Kriegel H.P., Sander J.: OPTICS: Ordering Points to Identify the Clustering Structure. SIGMOD Record 28(2), 49–60 (1999)
    2. Braman N., Adoui M.E., Vulchi M., Turk P., Etesami M., Fu P., Bera K., Drisis S., Varadan V., Plecha D., Benjelloun M., Abraham J., Madabhushi A.: Deep learning-based prediction of response to HER2-targeted neoadjuvant chemotherapy from pre-treatment dynamic breast MRI: A multi-institutional validation study. http://arxiv.org/abs/2001.08570 (2020)
    3. Brochez L., Verhaeghe E., Grosshans E., Haneke E., Piérard G., Ruiter D., Naeyaert J.M.: Inter-observer variation in the histopathological diagnosis of clinically suspicious pigmented skin lesions. Journal of Pathology 196(4), 459–466 (2002)
    4. Çiçek Ö., Abdulkadir A., Lienkamp S.S., Brox T., Ronneberger O.: 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In: Ourselin S., Joskowicz L., Sabuncu M.R., Unal G., Wells W. (eds.) Medical Image Computing and Computer-Assisted Intervention, pp. 424–432. Springer International Publishing, Cham (2016)
    5. Chen L.C., Papandreou G., Kokkinos I., Murphy K., Yuille A.L.: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(4), 834–848 (2018)
