Explainable Image Quality Assessments in Teledermatological Photography

Raluca Jalaboi et al. Telemed J E Health. 2023 Sep;29(9):1342-1348. doi: 10.1089/tmj.2022.0405. Epub 2023 Feb 3.

Abstract

Background and Objectives: Image quality is a crucial factor in the effectiveness and efficiency of teledermatological consultations. However, up to 50% of images sent by patients have quality issues, which increases the time to diagnosis and treatment. An automated, easily deployable, explainable method for assessing image quality is needed to improve the current teledermatological consultation flow. We introduce ImageQX, a convolutional neural network for image quality assessment with a learning mechanism for identifying the most common poor image quality explanations: bad framing, bad lighting, blur, low resolution, and distance issues.

Methods: ImageQX was trained on 26,635 photographs and validated on 9,874 photographs, each annotated with image quality labels and poor image quality explanations by up to 12 board-certified dermatologists. The photographs were taken between 2017 and 2019 using a mobile skin disease tracking application accessible worldwide.

Results: Our method achieves expert-level performance for both image quality assessment and poor image quality explanation. For image quality assessment, ImageQX obtains a macro F1-score of 0.73 ± 0.01, within one standard deviation of the pairwise inter-rater F1-score of 0.77 ± 0.07. For poor image quality explanations, our method obtains F1-scores between 0.37 ± 0.01 and 0.70 ± 0.01, comparable to the pairwise inter-rater F1-scores, which range between 0.24 ± 0.15 and 0.83 ± 0.06. Moreover, at a size of only 15 MB, ImageQX is easily deployable on mobile devices.

Conclusion: With image quality detection performance similar to that of dermatologists, incorporating ImageQX into the teledermatology workflow can enable better, faster remote consultations.
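For readers who want to see how the reported comparison against dermatologist agreement could be reproduced, the sketch below shows one way to compute a macro F1-score for the model and a pairwise inter-rater F1-score over annotators; the function names, toy labels, and the assumption that rater pairs share a common image subset are illustrative, and this is not the authors' evaluation code.

    # Minimal sketch (not from the paper): macro F1 for the model and pairwise
    # inter-rater F1 over dermatologists; labels are illustrative (1 = good, 0 = poor).
    from itertools import combinations

    import numpy as np
    from sklearn.metrics import f1_score

    def model_macro_f1(y_true, y_pred):
        # Macro F1: unweighted mean of per-class F1 scores.
        return f1_score(y_true, y_pred, average="macro")

    def pairwise_inter_rater_f1(raters):
        # Treat one rater as reference and the other as prediction for each pair,
        # then report mean and standard deviation over all rater pairs
        # (assumes each pair annotated the same image subset).
        scores = [
            f1_score(raters[a], raters[b], average="macro")
            for a, b in combinations(sorted(raters), 2)
        ]
        return np.mean(scores), np.std(scores)

    # Toy usage:
    raters = {"r1": [1, 0, 1, 1], "r2": [1, 0, 0, 1], "r3": [1, 1, 0, 1]}
    print(model_macro_f1([1, 0, 1, 1], [1, 0, 0, 1]))
    print(pairwise_inter_rater_f1(raters))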

Keywords: artificial intelligence; deep learning; explainability; image quality; teledermatology; telemedicine.


Conflict of interest statement

No competing financial interests exist.

Figures

Fig. 1.
ImageQX network architecture. To facilitate deployment on mobile devices, we use the lightweight EfficientNet-B0 architecture as a feature extractor. A linear block, composed of a linear layer, batch normalization, and a dropout layer, is used to parse these features before predicting poor image quality explanations, that is, bad framing, bad light, blurry, low resolution, and too far away. Another similar linear block parses the image features and then concatenates them with the poor image quality explanations to predict the image quality label.
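A minimal PyTorch sketch of an architecture following this description is given below; the hidden size, dropout rate, and the torchvision EfficientNet-B0 backbone are assumptions rather than the authors' released implementation.

    # Hedged sketch of an ImageQX-style architecture as described in Fig. 1.
    import torch
    import torch.nn as nn
    from torchvision.models import efficientnet_b0

    class LinearBlock(nn.Module):
        def __init__(self, in_features, out_features, dropout=0.3):  # dropout rate assumed
            super().__init__()
            self.block = nn.Sequential(
                nn.Linear(in_features, out_features),
                nn.BatchNorm1d(out_features),
                nn.Dropout(dropout),
            )

        def forward(self, x):
            return self.block(x)

    class ImageQXSketch(nn.Module):
        NUM_EXPLANATIONS = 5  # bad framing, bad light, blurry, low resolution, too far away

        def __init__(self, hidden=256):  # hidden size assumed
            super().__init__()
            backbone = efficientnet_b0(weights=None)
            self.features = backbone.features        # lightweight feature extractor
            self.pool = nn.AdaptiveAvgPool2d(1)
            feat_dim = 1280                           # EfficientNet-B0 output channels
            self.explanation_block = LinearBlock(feat_dim, hidden)
            self.explanation_head = nn.Linear(hidden, self.NUM_EXPLANATIONS)
            self.quality_block = LinearBlock(feat_dim, hidden)
            # Image features are concatenated with the explanation logits
            # before predicting the overall quality label (good vs. poor).
            self.quality_head = nn.Linear(hidden + self.NUM_EXPLANATIONS, 2)

        def forward(self, x):
            feats = self.pool(self.features(x)).flatten(1)
            explanations = self.explanation_head(self.explanation_block(feats))
            quality = self.quality_head(
                torch.cat([self.quality_block(feats), explanations], dim=1)
            )
            return quality, explanations

Predicting the five explanations jointly with the overall label, and concatenating the explanation logits back into the quality head, mirrors the two-branch structure shown in Fig. 1.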
Fig. 2.
Labeling protocol for the ImageQX training and validation dataset. Dermatologists start by assessing whether the image can be diagnosed. If it can, they assign a diagnosis using an ICD-10 code. Otherwise, if there is no visible skin or no visible lesion in the picture, they discard the image as no skin or healthy skin, respectively. Finally, if the image cannot be evaluated because of poor quality, they select one of the five investigated poor image quality explanations.
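This decision flow can be summarized in a few lines of Python; the function name and label strings below are illustrative and do not reflect the annotation tool's actual interface.

    # Minimal sketch of the labeling decision flow in Fig. 2 (names are illustrative).
    def label_image(is_diagnosable, has_visible_skin, has_visible_lesion,
                    icd10_code=None, quality_issue=None):
        # 1. Diagnosable images receive an ICD-10 code.
        if is_diagnosable:
            return {"label": "diagnosable", "icd10": icd10_code}
        # 2. Non-diagnosable images without visible skin or lesions are discarded.
        if not has_visible_skin:
            return {"label": "no skin"}
        if not has_visible_lesion:
            return {"label": "healthy skin"}
        # 3. Remaining images are poor quality; one of five explanations is selected:
        #    bad framing, bad light, blurry, low resolution, too far away.
        return {"label": "poor quality", "explanation": quality_issue}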
Fig. 3.
Illustration of poor image quality explanations that can be detected by ImageQX. (a) Bad framing: the image was not centered on the lesion. (b) Bad light: the lighting conditions in which the image was taken were too dark or too bright. (c) Blurry: the image is not focused on the lesion, masking out its details. (d) Low resolution: the image was taken with a low-resolution camera and few details can be discerned. (e) Too far away: few lesion details could be seen owing to the distance from the camera. Images courtesy of the authors.
Fig. 4.
Grad-CAM attention maps for the blurry test image introduced in Figure 3. The image was correctly classified as poor quality. (a) Original blurry image. (b) Grad-CAM attention map for bad light. (c) Grad-CAM attention map for blurry. (d) Grad-CAM attention map for low resolution. When predicting bad light, ImageQX focuses on a slightly shaded part of the arm, whereas for blurry it highlights the lesion and its surrounding area. The low-resolution prediction is based on the edges of the arm and the background. Image courtesy of the authors. Grad-CAM, Gradient-weighted Class Activation Mapping.
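The sketch below shows one standard way such Grad-CAM maps can be generated for a single explanation class, reusing the hypothetical ImageQXSketch module from the Fig. 1 sketch; it is not the authors' visualization code.

    # Hedged sketch: Grad-CAM over the last convolutional stage of ImageQXSketch.
    import torch
    import torch.nn.functional as F

    def grad_cam(model, image, class_idx, head="explanations"):
        """Return an (H, W) attention map for `class_idx` of the chosen output head."""
        model.eval()
        activations, gradients = [], []
        last_conv = model.features[-1]              # final convolutional stage
        h1 = last_conv.register_forward_hook(lambda m, i, o: activations.append(o))
        h2 = last_conv.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
        try:
            quality, explanations = model(image.unsqueeze(0))
            logits = explanations if head == "explanations" else quality
            logits[0, class_idx].backward()
        finally:
            h1.remove()
            h2.remove()
        acts, grads = activations[0], gradients[0]      # (1, C, h, w)
        weights = grads.mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
        cam = F.relu((weights * acts).sum(dim=1))       # weighted sum over channels
        cam = F.interpolate(cam.unsqueeze(0), size=image.shape[1:],
                            mode="bilinear", align_corners=False)[0, 0]
        return (cam / cam.max().clamp(min=1e-8)).detach()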
