Integrated convolutional neural network for skin cancer classification with hair and noise restoration

Nidhi Bansal et al. Turk J Med Sci. 2023 Oct 16;55(1):161-177. doi: 10.55730/1300-0144.5954. eCollection 2025.

Abstract

Background/aim: Skin lesions are commonly diagnosed and classified using dermoscopic images. There are many artifacts visible in dermoscopic images, including hair strands, noise, bubbles, blood vessels, poor illumination, and moles. These artifacts can obscure crucial information about lesions, which limits the ability to diagnose lesions automatically. This study investigated how hair and noise artifacts in lesion images affect classifier performance and how they can be removed to improve diagnostic accuracy.

Materials and methods: A synthetic dataset created using hair simulation and noise simulation was used in conjunction with the HAM10000 benchmark dataset. Integrated convolutional neural networks (CNNs) were proposed: integrated hair removal (IHR), which removes hair artifacts by inpainting and then classifies the refined dehaired images, and integrated noise removal (INR), which removes noise artifacts by nonlocal means denoising and then classifies the refined denoised images.
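
As a rough illustration of the two preprocessing paths described above (not the authors' exact implementation), the sketch below uses OpenCV: a blackhat morphological filter plus inpainting for the hair-removal (IHR) path, and nonlocal means denoising for the noise-removal (INR) path. Function choices, kernel sizes, and filter strengths are assumptions for illustration only.

import cv2

def remove_hair(image_bgr):
    """Illustrative hair-removal step: detect dark hair strands with a
    blackhat filter, threshold them into a mask, and inpaint the masked
    pixels. Kernel size and threshold are assumed values, not the paper's."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(image_bgr, hair_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)

def remove_noise(image_bgr):
    """Illustrative nonlocal means denoising step; filter strengths and
    window sizes are assumed values, not the paper's."""
    return cv2.fastNlMeansDenoisingColored(image_bgr, None, h=10, hColor=10,
                                           templateWindowSize=7,
                                           searchWindowSize=21)

# Hypothetical usage on a single dermoscopic image before classification.
img = cv2.imread("lesion.jpg")
dehaired = remove_hair(img)   # input to the IHR classifier
denoised = remove_noise(img)  # input to the INR classifier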

Results: Five deep learning models were used for classification: ResNet50, DenseNet121, ResNet152, VGG16, and VGG19. The proposed IHR-DenseNet121, IHR-ResNet50, and IHR-ResNet152 achieved 2.3%, 1.78%, and 1.89% higher accuracy than DenseNet121, ResNet50, and ResNet152, respectively, after hair removal. The proposed INR-DenseNet121, INR-ResNet50, and INR-VGG19 achieved 1.41%, 2.39%, and 18.4% higher accuracy than DenseNet121, ResNet50, and VGG19, respectively, after noise removal.
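
For context, the backbones named above are standard ImageNet-pretrained architectures. A minimal transfer-learning setup for one of them, sketched here with Keras for DenseNet121 and an assumed 7-class HAM10000 output head, might look like the following; input size, head layout, and optimizer settings are assumptions, not the authors' training configuration.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

# Illustrative classifier head on an ImageNet-pretrained DenseNet121;
# the same pattern applies to ResNet50, ResNet152, VGG16, and VGG19.
base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for initial training

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(7, activation="softmax"),  # HAM10000 defines 7 lesion classes
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])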

Conclusion: A significant proportion of pixels within lesion areas are affected by hair and noise, which reduces classification accuracy. However, the proposed IHR- and INR-based CNNs show notably improved performance once these affected pixels are restored, and the performance of the proposed approach surpasses that of existing methods.

Keywords: Dermoscopic images; classification; convolutional neural network; image hair; image noise; image restoration.


Conflict of interest statement

Conflict of interest: The authors declare that there are no known competing financial interests or personal relationships that could have influenced the work reported in this paper.

Figures

Figure 1. (a)–(d) present images from the Hair Dataset, while (e)–(h) present images from the Noise Dataset.

Figure 2. Proposed CNN for dermoscopic image classification.

Figure 3. Integration of CNN with inpainting for dermoscopic hair removal and classification.

Figure 4. Stages of the hair removal process for dermoscopic images.

Figure 5. Integration of CNN with denoising for dermoscopic noise removal and classification.

Figure 6. Comparison of improvement in training accuracy and loss after dehairing.

Figure 7. Comparison of improvement in training accuracy and loss after denoising.

Figure 8. Comparison of improvement in training and validation accuracy for GT, Hair Dataset, and proposed IHR model.

Figure 9. Comparison of improvement in training and validation loss for GT, Hair Dataset, and proposed IHR model.

Figure 10. Comparison of improvement in training and validation accuracy for GT, Noise Dataset, and proposed INR model.

Figure 11. Comparison of improvement in training and validation loss for GT, Noise Dataset, and proposed INR model.

