Semi-Supervised Nests of Melanocytes Segmentation Method Using Convolutional Autoencoders

Dariusz Kucharski et al. Sensors (Basel). 2020 Mar 11;20(6):1546. doi: 10.3390/s20061546.

Abstract

In this research, we present a semi-supervised segmentation solution using convolutional autoencoders to address segmentation tasks with a small number of ground-truth images. We evaluate the proposed deep network architecture on the detection of nests of nevus cells in histopathological images of skin specimens, which is an important step in dermatopathology. Diagnostic criteria based on the degree of uniformity and symmetry of border irregularities are particularly vital in dermatopathology for distinguishing between benign and malignant skin lesions. To the best of our knowledge, this is the first described method to segment nest regions. The novelty of our approach lies not only in the area of research, but also in addressing the problem of a small ground-truth dataset. We propose an effective computer-vision-based deep learning tool that performs nest segmentation using an autoencoder architecture with two learning steps. Experimental results verified the effectiveness of the proposed approach and its ability to segment nest areas with a Dice similarity coefficient of 0.81, sensitivity of 0.76, and specificity of 0.94, which is a state-of-the-art result.
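
As a point of reference, the reported metrics can be computed from binary prediction and ground-truth masks as in the minimal sketch below (not the authors' code; the helper name and the pixel-wise formulation are assumptions):

    # Minimal sketch (not the authors' code): Dice similarity coefficient,
    # sensitivity, and specificity computed pixel-wise from binary masks.
    import numpy as np

    def segmentation_metrics(pred, truth):
        """pred, truth: boolean arrays of equal shape (True = nest pixel)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        tn = np.sum(~pred & ~truth)
        dice = 2 * tp / (2 * tp + fp + fn)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return dice, sensitivity, specificity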

Keywords: autoencoders; computer vision; deep learning; epidermis; pathology; semi-supervised learning; skin.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Examples of nests of nevus cells (marked with arrows): (a) a dermal nest; (b) a nest at the tip of a rete ridge; and (c) a nest adjacent to the epidermal plate; note the strong pigmentation of the cytoplasm in nevus cells. The structure of nests is highly non-uniform and varies between individual nests.
Figure 2
Examples of melanocytic lesions containing nests of nevus cells: (a) in junctional dysplastic nevi, nevus cells are typically arranged in cohesive nests along the dermal-epidermal junction, and the nests often join together; (b) in nevi, nests are often positioned at the tips of rete ridges; (c) in melanoma in situ, there are often large, confluent nests, irregular in shape and size, unevenly distributed along the dermal-epidermal junction; and (d) in superficial spreading melanoma (SSM), nests are present above the suprapapillary plate.
Figure 3
Whole slide imaging: (a) an example of a whole slide image (WSI) produced by a scanning system; (b) a WSI scanning system consists of dedicated hardware and software.
Figure 4
Examples of generated patches of size 128×128 pixels each (windows of such size typically include enough context to label the central pixel as either “part of a nest” or “not part of a nest” with high confidence).
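Patch generation of this kind can be sketched with a simple sliding window; the 128×128 patch size comes from the paper, while the stride and function name below are assumptions for illustration:

    # Illustrative sliding-window patch extraction; stride is an assumption.
    import numpy as np

    def extract_patches(image, patch_size=128, stride=64):
        """image: H x W x C array; yields patch_size x patch_size crops."""
        h, w = image.shape[:2]
        for y in range(0, h - patch_size + 1, stride):
            for x in range(0, w - patch_size + 1, stride):
                yield image[y:y + patch_size, x:x + patch_size]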
Figure 5
Schema of a basic autoencoder including the encoder, decoder, and code parts. The model contains an encoder function g(.) and a decoder function f(.), parameterized by ϕ and θ, respectively. The low-dimensional code learned for input x in the bottleneck layer is z, and the reconstructed input is x′.
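In this notation, the autoencoder maps the input through the bottleneck code and is trained to reproduce it; the squared-error reconstruction loss written below is a standard choice and an assumption here, since the excerpt does not state the exact loss used:

    z = g_{\phi}(x), \qquad
    x' = f_{\theta}(z) = f_{\theta}\!\left(g_{\phi}(x)\right), \qquad
    \mathcal{L}_{\mathrm{rec}}(\phi, \theta) = \lVert x - x' \rVert_2^2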
Figure 6
Architecture of the proposed convolutional autoencoder. Each box corresponds to a multichannel feature map. The horizontal arrow denotes transfer between the encoding and decoding parts.
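A minimal Keras sketch of a convolutional encoder-decoder of this general shape is shown below; the layer counts, filter sizes, and the single skip connection standing in for the horizontal transfer arrow are assumptions and do not reproduce the authors' exact architecture:

    # Minimal convolutional encoder-decoder sketch (assumed layer sizes).
    from tensorflow.keras import layers, Model

    def build_conv_autoencoder(input_shape=(128, 128, 3)):
        inp = layers.Input(shape=input_shape)
        # Encoding path
        e1 = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
        p1 = layers.MaxPooling2D(2)(e1)
        e2 = layers.Conv2D(64, 3, activation="relu", padding="same")(p1)
        p2 = layers.MaxPooling2D(2)(e2)
        # Bottleneck ("code" z)
        z = layers.Conv2D(128, 3, activation="relu", padding="same")(p2)
        # Decoding path with one transfer (skip) from the encoder
        d2 = layers.Conv2DTranspose(64, 3, strides=2, activation="relu", padding="same")(z)
        d2 = layers.Concatenate()([d2, e2])
        d1 = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(d2)
        out = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(d1)
        return Model(inp, out)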
Figure 7
Outcomes of the first stage of the semi-supervised autoencoder training process: (a) original patches and (b) reconstructed images.
Figure 8
Outcomes of the second training stage of the convolutional autoencoder: (a) original images (patches), (b) generated masks, and (c) ground-truth images.
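The two training stages can be sketched as follows, reusing the build_conv_autoencoder function from the architecture sketch above; the loss functions, optimizer, epoch counts, and the output-head replacement are assumptions, not the paper's exact training recipe:

    # Illustrative two-stage training sketch with placeholder data.
    import numpy as np
    from tensorflow.keras import layers, Model

    unlabelled_patches = np.random.rand(256, 128, 128, 3).astype("float32")
    labelled_patches = np.random.rand(32, 128, 128, 3).astype("float32")
    nest_masks = np.random.randint(0, 2, (32, 128, 128, 1)).astype("float32")

    # Stage 1: self-supervised reconstruction on unlabelled patches.
    autoencoder = build_conv_autoencoder()
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(unlabelled_patches, unlabelled_patches, epochs=5)

    # Stage 2: keep the pretrained weights and fine-tune against the small
    # set of ground-truth masks through a single-channel segmentation head.
    seg_out = layers.Conv2D(1, 1, activation="sigmoid")(autoencoder.layers[-2].output)
    segmenter = Model(autoencoder.input, seg_out)
    segmenter.compile(optimizer="adam", loss="binary_crossentropy")
    segmenter.fit(labelled_patches, nest_masks, epochs=5)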
Figure 9
Feature maps of the autoencoder convolutional layers: (a) original image, (b) generated mask, (c) ground-truth image, and (d) a partial feature map from the latent layers (only a few of the more interesting activations are included).
Figure 10
Error rate for reconstruction and segmentation over the training and validation data.
Figure 11
Learning rate decay for each epoch for (a) reconstruction and (b) segmentation over the training and validation data.
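A per-epoch decay of the kind plotted here can be implemented with a Keras learning-rate scheduler; the exponential form, initial rate, and decay constant below are assumptions:

    # Illustrative exponential per-epoch learning-rate decay (assumed values).
    import math
    from tensorflow.keras.callbacks import LearningRateScheduler

    def lr_schedule(epoch, lr, initial_lr=1e-3, decay_rate=0.05):
        # The current lr is ignored; the rate is recomputed from the epoch index.
        return initial_lr * math.exp(-decay_rate * epoch)

    decay_callback = LearningRateScheduler(lr_schedule)
    # Pass callbacks=[decay_callback] to model.fit() during training.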
