Comparative Study

Lancet Oncol. 2019 Jul;20(7):938-947. doi: 10.1016/S1470-2045(19)30333-X. Epub 2019 Jun 12.

Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: an open, web-based, international, diagnostic study

Philipp Tschandl et al. Lancet Oncol. 2019 Jul.

Abstract

Background: Whether machine-learning algorithms can diagnose all pigmented skin lesions as accurately as human experts is unclear. The aim of this study was to compare the diagnostic accuracy of state-of-the-art machine-learning algorithms with human readers for all clinically relevant types of benign and malignant pigmented skin lesions.

Methods: For this open, web-based, international, diagnostic study, human readers were asked to diagnose dermatoscopic images selected randomly in 30-image batches from a test set of 1511 images. The diagnoses from human readers were compared with those of 139 algorithms created by 77 machine-learning labs that participated in the International Skin Imaging Collaboration 2018 challenge and received a training set of 10 015 images in advance. The ground truth of each lesion fell into one of seven predefined disease categories: intraepithelial carcinoma including actinic keratoses and Bowen's disease; basal cell carcinoma; benign keratinocytic lesions including solar lentigo, seborrheic keratosis, and lichen planus-like keratosis; dermatofibroma; melanoma; melanocytic nevus; and vascular lesions. The two main outcomes were the differences in the number of correct specific diagnoses per batch between all human readers and the top three algorithms, and between human experts and the top three algorithms.
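
To make the primary outcome concrete, the following is a minimal Python sketch of how correct specific diagnoses per 30-image batch could be counted and the reader-versus-algorithm difference summarised. The data are simulated and all names, accuracy values, and the simple normal-approximation interval are illustrative assumptions; this is not the authors' code or their actual statistical model.

    import numpy as np

    # Simulated stand-in data (illustrative only; not the study's data or analysis code).
    # Each row is one 30-image batch; entries are diagnosis labels for the seven
    # predefined disease categories, encoded as integers 0-6.
    rng = np.random.default_rng(0)
    n_batches, batch_size, n_classes = 200, 30, 7
    truth = rng.integers(0, n_classes, size=(n_batches, batch_size))

    def simulate_predictions(truth, accuracy):
        """Return labels that match the ground truth with roughly the given probability
        (a random category otherwise)."""
        random_labels = rng.integers(0, n_classes, size=truth.shape)
        return np.where(rng.random(truth.shape) < accuracy, truth, random_labels)

    reader_preds = simulate_predictions(truth, accuracy=0.60)   # hypothetical human readers
    algo_preds = simulate_predictions(truth, accuracy=0.66)     # hypothetical top algorithms

    # Primary outcome: number of correct specific diagnoses per 30-image batch.
    reader_correct = (reader_preds == truth).sum(axis=1)
    algo_correct = (algo_preds == truth).sum(axis=1)

    diff = algo_correct - reader_correct
    se = diff.std(ddof=1) / np.sqrt(n_batches)
    print(f"mean difference per batch: {diff.mean():.2f} "
          f"(approx. 95% CI {diff.mean() - 1.96*se:.2f} to {diff.mean() + 1.96*se:.2f})")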

Findings: Between Aug 4, 2018, and Sept 30, 2018, 511 human readers from 63 countries had at least one attempt in the reader study. 283 (55·4%) of 511 human readers were board-certified dermatologists, 118 (23·1%) were dermatology residents, and 83 (16·2%) were general practitioners. When comparing all human readers with all machine-learning algorithms, the algorithms achieved a mean of 2·01 (95% CI 1·97 to 2·04; p<0·0001) more correct diagnoses (17·91 [SD 3·42] vs 19·92 [4·27]). 27 human experts with more than 10 years of experience achieved a mean of 18·78 (SD 3·15) correct answers, compared with 25·43 (1·95) correct answers for the top three machine algorithms (mean difference 6·65, 95% CI 6·06-7·25; p<0·0001). The difference between human experts and the top three algorithms was significantly lower for images in the test set that were collected from sources not included in the training set (human underperformance of 11·4%, 95% CI 9·9-12·9 vs 3·6%, 0·8-6·3; p<0·0001).

Interpretation: State-of-the-art machine-learning classifiers outperformed human experts in the diagnosis of pigmented skin lesions and should have a more important role in clinical practice. However, a possible limitation of these algorithms is their decreased performance for out-of-distribution images, which should be addressed in future research.

Funding: None.

Figures

Figure 1: Numbers of registered and participating users on the study platform.

Figure 2: Mean differences in correct diagnoses of human experts versus the top three machine-learning algorithms in batches of 30 images. Data are mean (95% CI).

Figure 3: Mean difference between all human expert readers and all machine-learning algorithms for the number of correct diagnoses per batch. Error bars denote 95% CIs. Machine-learning groups were allowed up to three technically distinct test-set submissions, resulting in multiple entries for some groups. Algorithms listed further down the y axis performed better relative to human readers.

Figure 4: Receiver operating characteristic curves of the diagnostic performance for discrimination of malignant from benign pigmented skin lesions. Blue dots indicate single human sensitivities and specificities, the purple box indicates the mean, and the error bars around the mean indicate 95% CIs.
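
As background to this figure, a seven-category classifier can be reduced to a malignant-versus-benign discrimination by collapsing its class probabilities into a single malignancy score. The Python sketch below, using scikit-learn on simulated data, illustrates one way to do this; the class indices, the grouping of malignant categories, and the simulated probabilities are assumptions for illustration, not the study's actual evaluation code.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Simulated per-lesion class probabilities over the seven categories (illustrative only).
    rng = np.random.default_rng(1)
    n_lesions, n_classes = 300, 7
    probs = rng.dirichlet(np.ones(n_classes), size=n_lesions)

    # Hypothetical index assignment: treat melanoma, basal cell carcinoma, and
    # intraepithelial carcinoma as the malignant categories.
    malignant_classes = [0, 1, 4]
    y_true = rng.integers(0, 2, size=n_lesions)          # simulated ground truth (1 = malignant)

    # Collapse the multi-class output into one malignancy score and compute the ROC curve.
    malignancy_score = probs[:, malignant_classes].sum(axis=1)
    fpr, tpr, _ = roc_curve(y_true, malignancy_score)
    print(f"AUC on simulated data: {roc_auc_score(y_true, malignancy_score):.2f}")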

Comment in

  • Massi D, Laurino M. Machine versus man in skin cancer diagnosis. Lancet Oncol 2019; 20: 891–92. doi: 10.1016/S1470-2045(19)30391-2. PMID: 31201138. No abstract available.
