. 2024 Jan 5;14(1):676.
doi: 10.1038/s41598-023-49721-x.

DDCNN-F: double decker convolutional neural network 'F' feature fusion as a medical image classification framework

Nirmala Veeramani et al. Sci Rep.

Abstract

Melanoma is a severe skin cancer involving abnormal cell development. This study presents a new feature fusion framework for melanoma classification that includes a novel 'F' flag feature for early detection. This 'F' indicator efficiently distinguishes benign skin lesions from malignant ones (melanoma). The proposed architecture, called DDCNN feature fusion, is built on a Double Decker Convolutional Neural Network. The network's first deck, a Convolutional Neural Network (CNN), identifies difficult-to-classify hairy images using a confidence factor termed the intra-class variance score. These hairy image samples are collected into a Baseline Separated Channel (BSC). After hair removal and data augmentation, the BSC is ready for analysis. The network's second deck trains on the pre-processed BSC and generates bottleneck features. These bottleneck features are merged with features derived from the ABCDE clinical bio-indicators to improve classification accuracy. The resulting hybrid fused features, together with the novel 'F' flag feature, are fed to different types of classifiers. The proposed system was trained on the ISIC 2019 and ISIC 2020 datasets to assess its performance. The empirical findings show that the DDCNN feature fusion strategy for detecting malignant melanoma achieved a specificity of 98.4%, an accuracy of 93.75%, a precision of 98.56%, and an Area Under the Curve (AUC) of 0.98. The DDCNN 'F' feature fusion framework thus accurately identifies and diagnoses this fatal skin cancer and outperforms other state-of-the-art techniques. The study also found that several classifiers improve when using the 'F' indicator, with specificity gains of up to 7.34%.
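The fusion step described in the abstract, concatenating deck-two bottleneck features with the ABCDE clinical bio-indicator scores and the binary 'F' flag before classification, can be sketched roughly as follows. This is a minimal illustration only: the vector dimensions, function name, and example values are assumptions, not taken from the paper.

```python
import numpy as np

def fuse_features(bottleneck, abcde, f_flag):
    """Concatenate CNN bottleneck features with the five ABCDE
    clinical scores and the binary 'F' flag into one hybrid vector
    (hypothetical layout; the paper's exact dimensions may differ)."""
    return np.concatenate([np.asarray(bottleneck, dtype=float),
                           np.asarray(abcde, dtype=float),
                           [float(f_flag)]])

# Illustrative example: a 4-dim bottleneck vector and 5 ABCDE scores
bottleneck = [0.12, 0.85, 0.33, 0.47]
abcde = [0.6, 0.4, 0.7, 0.2, 0.1]  # Asymmetry..Evolution
fused = fuse_features(bottleneck, abcde, f_flag=1)
print(fused.shape)  # (10,)
```

The fused vector would then be passed to a downstream classifier (e.g. an SVM or random forest), which is consistent with the abstract's statement that different classifier types are fed the hybrid features.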


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1. Architectural diagram of the proposed DDCNN system.
Algorithm 1. DDCNN() sequential workflow representation.
Algorithm 2. Do_BSC() pseudocode representation of the baseline separated channel.
Algorithm 3. Hair_removal() process.
Figure 2. Hair removal process from different sample images of hair from the ISIC 2019 and ISIC 2020 datasets.
Algorithm 4. Extraction of ABCDE features.
Algorithm 5. Feature fusion framework.
Figure 3. Extraction of Asymmetry, Border, Colour, Diameter and Evolution characteristics.
Figure 4. (a) Training accuracy vs. epochs; (b) training loss vs. epochs.
Figure 5. BSC results concerning the confidence factor and number of image samples.
Figure 6. Average error ratio vs. confidence factor ratio.
Figure 7. Hairy image samples segregated at the baseline separation channel (BSC).
Figure 8. Augmented samples of the skin lesion images.
Figure 9. (a) Input image; (b) processed scale image; (c) perimeter and feature calculated image; (d) masked image; (e) resulting image.
Figure 10. Feature maps of the benign skin lesion input images.
Figure 11. (a) Asymmetry, (b) border, (c) colour, and (d) diameter box plots.
Figure 12. (a) ROC plot of the proposed framework; (b) confusion matrix of two classes: benign and malignant.

