Automatic melanoma detection using an optimized five-stream convolutional neural network

Vida Esmaeili et al. Sci Rep. 2025 Jul 1;15(1):22404. doi: 10.1038/s41598-025-05675-w.
Abstract

Melanoma is among the deadliest forms of malignant skin cancer, and its incidence is rising dramatically worldwide. Early, accurate diagnosis is crucial for effective treatment, yet automatic melanoma detection faces several significant challenges: the lack of balanced datasets, high variability within melanoma lesions, differences in lesion location across images, similarity between different types of skin lesions, and the presence of various artifacts. In addition, previous deep-learning techniques for diagnosing melanoma cannot recognize the unique relations between samples; as a result, these convolutional neural networks (CNNs) do not perceive shifted or rotated image samples as similar. To address these issues, we apply pre-processing steps including hair removal, balancing of the skin lesion images with a generative adversarial network (GAN)-based method, denoising with a CNN-based method, and image enhancement. We also propose four new methods to extract key features: a hybrid of the uniform local binary pattern (ULBP) and the Chan-Vese algorithm (ULBP-CVA), multi-block ULBP on the nine suggested planes (multi-block ULBP-NP), a combination of multi-block Gabor magnitude and phase with ULBP-NP (multi-block GULBP-NP), and a combination of multi-block gradient magnitude and orientation with ULBP-NP (multi-block gradient ULBP-NP). The nine suggested planes capture the most vital information about skin lesions in any direction for accurate coding, recording synchronous spatial and local variations; hence, very similar lesions can be differentiated by revealing small changes in these small planes. Finally, we propose an optimized five-stream CNN (OFSCNN) for classification, which simultaneously processes the lesion color, lesion edges, texture features, local spatial-frequency features, and multi-oriented gradient features.
Simulation results show that our method compares favorably with the most relevant state-of-the-art methods for melanoma detection in dermoscopy images, automatically detecting melanoma at rates of 99.8%, 99.9%, 99.62%, and 99.6% on the HAM10000, ISIC 2024, ISIC 2017, and ISIC 2016 datasets, respectively.
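As background for the ULBP-based descriptors named above, here is a minimal sketch of uniform LBP coding with NumPy. This is illustrative only, not the authors' implementation; the 8-neighbor, radius-1 setting and the neighbor ordering are assumptions.

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch: each neighbor >= center sets one bit."""
    center = patch[1, 1]
    # clockwise neighbor order starting at the top-left pixel (an assumed convention)
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(n >= center) << i for i, n in enumerate(neighbors))

def is_uniform(code):
    """Uniform patterns have at most two 0/1 transitions in the circular bit string."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

patch = np.array([[9, 9, 9],
                  [1, 5, 9],
                  [1, 1, 1]])
code = lbp_code(patch)   # 0b00001111 = 15, a uniform pattern (one dark-to-bright edge)
```

In the full descriptor, only the 58 uniform codes get individual histogram bins and all non-uniform codes share one bin, which is what keeps the concatenated multi-plane histograms compact.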

Keywords: Classification; Convolutional neural network; Denoising; Melanoma detection; Skin cancer.


Conflict of interest statement

Competing interests: The authors declare no competing interests. Ethics approval: This manuscript, or any large part of it, has not been published and is not under submission to any other journal. All text and graphics, except those marked with sources, are original works of the authors. All authors made a significant contribution to the research reported and have read and approved the submitted manuscript.

Figures

Fig. 1. The general framework of the proposed method.
Fig. 2. The sequential flow between pre-processing, feature extraction, and classification.
Fig. 3. The flow diagram of the pre-processing step.
Fig. 4. The hair removal process. (a) Original skin lesion image; (b) grayscale cropped image; (c) morphological closing on the grayscale image; (d) image subtraction; (e) binary image after closing; (f) binary image after dilation; (g) the image with the hair removed.
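The steps in the Fig. 4 caption can be sketched roughly as follows. This is a simplified single-channel version with scipy.ndimage; the structuring-element size, threshold, and fill-from-closing inpainting are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy import ndimage

def remove_hair(gray, closing_size=5, thresh=10):
    """Rough hair removal on a grayscale image (values 0-255).
    Morphological closing erases thin dark structures (hairs);
    the difference to the original localizes them."""
    closed = ndimage.grey_closing(gray, size=(closing_size, closing_size))
    diff = closed.astype(int) - gray.astype(int)        # large where hairs were
    mask = diff > thresh                                # binary hair mask
    mask = ndimage.binary_dilation(mask)                # cover hair borders too
    out = gray.copy()
    out[mask] = closed[mask]                            # fill hair pixels from the closed image
    return out, mask

# toy example: bright "skin" with one thin dark "hair" line
img = np.full((9, 9), 200, dtype=np.uint8)
img[4, :] = 20
clean, mask = remove_hair(img)                          # the dark line is filled back to 200
```

A production pipeline would typically inpaint the masked pixels from their surroundings rather than copying the closed image, but the closing/subtraction/threshold/dilation sequence mirrors panels (c) through (f) above.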
Fig. 5. (a) Several samples of real images; (b) a sample of newly generated skin lesion images using the GAN-based method.
Fig. 6. The architecture of the pre-trained denoising network.
Fig. 7. Computing the ULBP.
Fig. 8. An example of applying the ULBP to a skin lesion image.
Fig. 9. The formula image curve of the CVA for extracting border irregularity and asymmetry.
Fig. 10. Creating a 3-dimensional volume by stacking multi-blocks of the skin lesion. (a) The skin lesion image; (b) conversion to gray level; (c) cropping into multi-blocks; (d) creating a 3-dimensional volume.
Fig. 11. Applying the suggested nine planes to the lesion block sequences (left) and obtaining the multi-block ULBP-NP histogram by concatenating the ULBP histograms of each plane (right).
Fig. 12. The position of the neighboring pixel (formula image = 2) on the second plane.
Fig. 13. Obtaining the Gabor phase and Gabor magnitude.
Fig. 14. The process of making the proposed multi-block GULBP-NP map.
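The Gabor magnitude and phase referenced above can be obtained generically as below: a complex Gabor kernel (Gaussian envelope times a complex sinusoid) is convolved with the image, and magnitude/phase are read off the complex response. The kernel size, sigma, frequency, and orientation values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=9, sigma=2.0, theta=0.0, freq=0.25):
    """Complex Gabor kernel: Gaussian envelope times a complex carrier wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * freq * xr)
    return envelope * carrier

def gabor_mag_phase(image, **kw):
    """Magnitude and phase maps of the complex Gabor response."""
    resp = convolve2d(image.astype(float), gabor_kernel(**kw), mode="same")
    return np.abs(resp), np.angle(resp)

img = np.zeros((16, 16))
img[:, 8:] = 1.0                                        # vertical step edge
mag, phase = gabor_mag_phase(img, theta=0.0)            # strong magnitude near the edge
```

A filter bank over several orientations and frequencies would produce the multi-channel magnitude and phase maps that are then ULBP-coded into the GULBP-NP descriptor.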
Fig. 15. The process of creating the proposed multi-block gradient ULBP-NP map.
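The per-pixel gradient magnitude and orientation feeding the multi-block gradient ULBP-NP map can be computed generically as follows; this is a standard central-difference sketch, not necessarily the paper's exact gradient operator.

```python
import numpy as np

def gradient_mag_orient(image):
    """Per-pixel gradient magnitude and orientation via central differences."""
    gy, gx = np.gradient(image.astype(float))   # derivatives along rows, then columns
    magnitude = np.hypot(gx, gy)                # sqrt(gx^2 + gy^2)
    orientation = np.arctan2(gy, gx)            # radians in (-pi, pi]
    return magnitude, orientation

# intensity ramp increasing left to right: gradient points along +x everywhere
ramp = np.tile(np.arange(8.0), (8, 1))
mag, ori = gradient_mag_orient(ramp)            # magnitude 1, orientation 0 everywhere
```

Quantizing the orientation into a few bins per block gives the multi-oriented gradient channels that the ULBP-NP coding then summarizes.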
Fig. 16. The proposed OFSCNN method for melanoma detection. (a) Skin lesion image; (b) output of the ULBP-CVA; (c) the multi-block ULBP-NP map; (d) the multi-block GULBP-NP map; (e) the multi-block gradient ULBP-NP map; (f) the OFSCNN architecture.
Fig. 17. Image samples from the datasets used.
Fig. 18. (a) A real image; (b) a sample of newly generated skin lesion images using the GAN-based method.
Fig. 19. The denoised images using the CNN-based method.
Fig. 20. Enhancing the images (smoothing by 0formula image and sharpening by formula image1).
Fig. 21. Two samples before and after the hair removal process.
Fig. 22. An example of applying histogram equalization to a skin lesion image.
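The histogram equalization shown in Fig. 22 can be sketched for an 8-bit grayscale image as below. This is the textbook CDF-mapping version and assumes a non-constant image; it is not necessarily the exact variant used in the paper.

```python
import numpy as np

def hist_equalize(gray):
    """Map 8-bit intensities through the normalized cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                   # CDF value of the first occupied bin
    # classic formula: stretch the CDF over the occupied range to 0..255
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

low_contrast = np.array([[100, 101], [102, 103]], dtype=np.uint8)
eq = hist_equalize(low_contrast)                # spreads 100..103 over 0..255
```

Equalization stretches low-contrast lesion regions across the full intensity range, which helps the subsequent texture descriptors separate lesion from skin.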
Fig. 23. Two samples of extracting the border and edge of a skin lesion using the ULBP-CVA. The formula image curve of the CVA on the ULBP image (left) and the image obtained from the ULBP-CVA (right).
Fig. 24. Evaluating the quality of the generated, denoised, and enhanced synthetic images using the FID metric.
Fig. 25. Comparing the accuracy rate between other methods and ours on the ISIC 2020 dataset.
Fig. 26. Comparing the recall rate between our work and others (Menegola et al., Vasconcelos et al., Oliveira et al., and Jaber et al.) on the ISIC 2016 dataset.
Fig. 27. Comparing the recall rate between our work and others (Bi et al., Li and Shen, Guo et al., and Jaber et al.) on the ISIC 2017 dataset.
Fig. 28. Comparing the precision rate between our work and others (Alwakid et al. (CNN learning model), Garg et al. (CNN with transfer learning), Lilhore et al. (optimized hybrid MobileNet-V3), Musthafa et al. (sophisticated CNN), and Houssein (deep CNN)) on the HAM 10000 dataset.
Fig. 29. Evaluating the impact of our feature extractors on the melanoma detection rate.
Fig. 30. The extracted features in different layers of our proposed network using the skin lesion image (a), ULBP-CVA image (b), multi-block ULBP-NP map (c), multi-block GULBP-NP map (d), and multi-block gradient ULBP-NP map (e).
Fig. 31. Comparing the training time between our CNN model and other models.
Fig. 32. Our proposed network’s accuracy and loss curves.
Fig. 33. Evaluation of our OFSCNN and other state-of-the-art models (customized VGG-16, ResNet-18, EfficientNet, and DenseNet-121) for melanoma detection.
Fig. 34. Comparing the performance of different optimizers.
Fig. 35. Comparing the accuracy rate with and without the GAN-based data augmentation method on the ISIC 2017 dataset.
Fig. 36. Comparing the accuracy rate of melanoma detection using different numbers of local neighboring points and circle radii in the ULBP.
Fig. 37. Comparing the error rate of our OFSCNN and a non-optimized CNN.
Fig. 38. The elapsed time of the different methods presented in this work.

