2023 Aug 11;4(2):100380. doi: 10.1016/j.xops.2023.100380. eCollection 2024 Mar-Apr.

Keratoconus Detection Based on Dynamic Corneal Deformation Videos Using Deep Learning


Hazem Abdelmotaal et al. Ophthalmol Sci.

Abstract

Objective: To assess the performance of convolutional neural networks (CNNs) for automated detection of keratoconus (KC) in standalone Scheimpflug-based dynamic corneal deformation videos.

Design: Retrospective cohort study.

Participants: We retrospectively analyzed datasets comprising records of 734 nonconsecutive refractive surgery candidates and patients with unilateral or bilateral KC.

Methods: We first developed a video preprocessing pipeline to translate dynamic corneal deformation videos into 3-dimensional pseudoimage representations and then trained a CNN to directly identify KC from pseudoimages. We calculated the model's KC probability score cut-off and evaluated the performance by subjective and objective accuracy metrics using 2 independent datasets.
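The preprocessing step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and only the 140-frame count and the 3 reference-segment maps are taken from the figure captions.

```python
import numpy as np

def rows_to_map(distance_rows):
    """Stack per-frame distance rows into a 2-D numerical array:
    one row per video frame, one column per corneal skeleton pixel."""
    return np.vstack(distance_rows)

def combine_maps(map_orig, map_appl, map_conc):
    """Combine the 3 reference-segment maps (original position,
    applanation, maximum concavity) into a single 3-channel
    pseudoimage, analogous to an RGB image."""
    return np.stack([map_orig, map_appl, map_conc], axis=-1)

# Toy example: 140 video frames, 200 skeleton pixels per frame.
rows = [np.random.rand(200) for _ in range(140)]
m = rows_to_map(rows)            # shape (140, 200)
pseudo = combine_maps(m, m, m)   # shape (140, 200, 3)
```

The resulting 3-channel pseudoimage can then be cropped and resized like an ordinary image before being fed to the CNN.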

Main outcome measures: Area under the receiver operating characteristics curve (AUC), accuracy, specificity, sensitivity, and KC probability score.

Results: The model accuracy on the test subset was 0.89 with AUC of 0.94. Based on the external validation dataset, the AUC and accuracy of the CNN model for detecting KC were 0.93 and 0.88, respectively.

Conclusions: Our deep learning-based approach was highly sensitive and specific in separating normal from keratoconic eyes using dynamic corneal deformation videos at levels that may prove useful in clinical practice.

Financial disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

Keywords: Artificial intelligence; Convolutional neural network; Deep learning; Keratoconus; Scheimpflug-based dynamic corneal deformation videos.


Figures

Figure 2
Flow chart of the method of calculating the distance between each pixel on the corneal skeleton and the corresponding pixel in the reference segment. This process is repeated for each extracted corneal skeleton to yield 140 numerical rows representing the 140 video frames. A-C represent the 3 reference segments used, and (1, 2, 3) represent samples of the corneal skeleton at its original position, at applanation, and at maximum concavity, respectively. The arrows represent the distance (in pixels) calculated between each reference segment and the corneal skeleton at the sampled corneal position. The highlighted arrows in B-2 are expected to represent similar values during applanation.
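The per-frame distance calculation in the caption above can be sketched as follows. This is a hypothetical simplification assuming one skeleton pixel per image column and a fixed reference segment, so the pixel distance reduces to a vertical offset.

```python
import numpy as np

def frame_distances(skeleton_y, reference_y):
    """Per-column distance (in pixels) between the corneal skeleton
    and a reference segment for one video frame. With one skeleton
    pixel per column, this is simply |y_skeleton - y_reference|."""
    return np.abs(np.asarray(skeleton_y, float) - np.asarray(reference_y, float))

# Toy frame: flat reference at y = 0; skeleton bowed at the centre
# (as at maximum concavity).
ref = np.zeros(5)
skel = np.array([0.0, 2.0, 5.0, 2.0, 0.0])
frame_distances(skel, ref)  # array([0., 2., 5., 2., 0.])
```

Repeating this for all 140 extracted skeletons produces the 140 numerical rows described in the caption.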
Figure 3
The 3 types of extracted 2-dimensional numerical arrays, rendered as heatmaps to facilitate visualization. 1-a, 1-b, and 1-c represent the original, applanation, and maximum concavity reference segment maps, respectively, extracted from supplementary video N4. 2-a, 2-b, and 2-c represent the same 3 maps extracted from supplementary video KC4. Hot colors represent larger values (distance in pixels); cool colors represent smaller values.
Figure 5
Schematic diagram showing the structure of the employed custom network created on top of the DenseNet121 model after truncation of the last classifying layer: first and second fully connected layers (1024 and 512 nodes, respectively); Batch Normalization; Dropout (0.5); GlobalAveragePooling2D; Batch Normalization; Dropout (0.5); final classifying fully connected layer (2 nodes with Softmax activation). Pseudoimages extracted from videos were cropped to the first 75 video frames and then resized to 224 × 224 before being fed into the model.
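The custom head described in the caption can be sketched in Keras. The layer order follows the caption; everything not stated there (Dense activations, untrained weights) is an assumption, so treat this as an illustrative sketch rather than the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# DenseNet121 backbone with its original classifying layer truncated.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights=None, input_shape=(224, 224, 3))

# Custom head, in the order given in the figure caption.
x = layers.Dense(1024, activation="relu")(base.output)  # activation assumed
x = layers.Dense(512, activation="relu")(x)             # activation assumed
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.5)(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(2, activation="softmax")(x)  # keratoconus vs. normal

model = models.Model(base.input, out)
```

The model takes the cropped, resized 224 × 224 pseudoimages as input and outputs a two-class probability vector.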
Figure 6
Epoch accuracy/loss during model training/validation (Dataset 2 training/validation subset with data augmentation). The confusion matrix shows the performance of the trained model on the test subset (Dataset 2 test subset).
Figure 7
A, Receiver operating characteristic (ROC) curve for binary (keratoconus vs. normal) classification task by the trained model on the test subset (Dataset 2 test subset) with an area under the curve (AUC) of 0.942. The ROC curve is marked by a red dot at the site closest to the perfect classification point. B, Plot showing probability score at the cut-off point and Youden Index.
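Selecting a probability cut-off by the Youden index, as shown in panel B, can be sketched as follows. This is a generic illustration with toy data, not the study's dataset; the Youden index is J = sensitivity + specificity − 1, maximized over candidate thresholds.

```python
import numpy as np

def youden_cutoff(labels, scores):
    """Return the probability cut-off that maximizes the Youden index
    J = sensitivity + specificity - 1 over the observed scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    best_j, best_t = -1.0, 0.5
    for t in np.unique(scores):
        pred = scores >= t
        sens = np.mean(pred[labels == 1])   # true-positive rate
        spec = np.mean(~pred[labels == 0])  # true-negative rate
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Toy example: 3 normal (0) and 3 keratoconus (1) eyes.
labels = [0, 0, 0, 1, 1, 1]
scores = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
youden_cutoff(labels, scores)  # (0.6, 1.0) - perfect separation here
```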
Figure 8
A, B, Receiver operating characteristic (ROC) curves, detection error trade-off curves, and confusion matrices for the binary (keratoconus vs. normal) classification task on the external validation subset (Dataset 1) by 3 Naive Bayes classifiers trained on the Dataset 2 stiffness parameter at first applanation (SP A1), Corvis biomechanical index (CBI), and tomographic and biomechanical index (TBI), compared with the performance of the adopted DenseNet121-based model on the cropped, resized external validation dataset (Dataset 1) pseudoimages.
Figure 10
A, Selected examples of normal group class activation maps (CAMs) showing the areas of the cornea and the frame sequence that are most important for the model's classification decision. All were correctly predicted, with the prediction probability shown. B, Selected examples of keratoconus group CAMs showing the areas of the cornea and the frame sequence that are most important for the model's classification decision. All were correctly predicted, with the prediction probability shown.
