Ophthalmol Sci. 2024 Nov 28;5(2):100664.
doi: 10.1016/j.xops.2024.100664. eCollection 2025 Mar-Apr.

EyeLiner: A Deep Learning Pipeline for Longitudinal Image Registration Using Fundus Landmarks

Yoga Advaith Veturi et al. Ophthalmol Sci.

Abstract

Objective: Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, "EyeLiner," for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences that are due to camera orientation while preserving pathological changes.

Design: EyeLiner registers a "moving" image to a "fixed" image using a DL-based keypoint matching algorithm.

Participants: We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).

Methods: Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.
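
As a rough illustration of this detect-and-match step, the sketch below uses the open-source SuperPoint and LightGlue implementations from the cvg/LightGlue package. The package, the file names, and the closing homography fit are assumptions for illustration only; the EyeLiner pipeline runs detection on vessel segmentation masks and fits a thin-plate spline rather than a homography (see Figure 2).

    # Hypothetical sketch of keypoint detection and matching for fundus registration.
    # Assumes the open-source `lightglue` package (github.com/cvg/LightGlue) and OpenCV;
    # this is not the authors' released code. EyeLiner feeds segmentation masks, not raw
    # photos, to the detector and uses a thin-plate spline instead of a homography.
    import cv2
    import torch
    from lightglue import LightGlue, SuperPoint
    from lightglue.utils import load_image, rbd

    device = "cuda" if torch.cuda.is_available() else "cpu"
    extractor = SuperPoint(max_num_keypoints=1024).eval().to(device)  # CNN keypoint detector
    matcher = LightGlue(features="superpoint").eval().to(device)      # transformer-based matcher

    fixed = load_image("fixed_cfp.png").to(device)    # placeholder file names
    moving = load_image("moving_cfp.png").to(device)

    with torch.no_grad():
        feats_f = extractor.extract(fixed)
        feats_m = extractor.extract(moving)
        out = matcher({"image0": feats_f, "image1": feats_m})

    feats_f, feats_m, out = [rbd(x) for x in (feats_f, feats_m, out)]  # drop batch dimension
    idx = out["matches"]                                  # (K, 2) index pairs
    pts_fixed = feats_f["keypoints"][idx[:, 0]].cpu().numpy()
    pts_moving = feats_m["keypoints"][idx[:, 1]].cpu().numpy()

    # Simplified final step: fit a global homography from the matched keypoints and
    # warp the moving image onto the fixed image grid.
    H, inliers = cv2.findHomography(pts_moving, pts_fixed, cv2.RANSAC, 5.0)
    h, w = cv2.imread("fixed_cfp.png").shape[:2]
    registered = cv2.warpPerspective(cv2.imread("moving_cfp.png"), H, (w, h))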

Main outcome measures: We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.
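
For concreteness, a minimal sketch of how these two metrics could be computed is given below. The function and array names are placeholders, and the 0 to 25 pixel threshold range for the success-rate curve follows the FIRE protocol as we understand it; this is not the authors' evaluation code.

    # Hypothetical evaluation helpers; names and the 25-pixel threshold ceiling are
    # assumptions for illustration, not the authors' evaluation code.
    import numpy as np

    def mean_distance(fixed_pts, registered_pts):
        """MD: mean Euclidean distance (pixels) between landmarks annotated on the
        fixed image and the same landmarks carried through the registration."""
        return float(np.mean(np.linalg.norm(fixed_pts - registered_pts, axis=1)))

    def fire_style_auc(per_pair_errors, max_threshold=25.0, steps=100):
        """Area under the success-rate-vs-threshold curve: for each threshold t, the
        fraction of image pairs whose registration error is at most t pixels.
        Uses a rectangle-rule approximation of the normalized area."""
        errors = np.asarray(per_pair_errors, dtype=float)
        thresholds = np.linspace(0.0, max_threshold, steps)
        success = [float((errors <= t).mean()) for t in thresholds]
        return float(np.mean(success))

    # Toy example with made-up per-pair errors (pixels): one badly failing pair lowers the AUC.
    print(fire_style_auc([1.8, 3.2, 7.5, 30.0]))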

Results: EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitatively, the MD after alignment decreased from 321.32 to 3.74 pixels for FIRE, from 9.86 to 2.03 pixels for CORIS, and from 25.23 to 5.94 pixels for SIGF. We also obtained an AUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, beating the current state-of-the-art SuperRetina (AUC of 0.76 on FIRE, 0.83 on CORIS, and 0.74 on SIGF).

Conclusions: Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.

Financial disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

Keywords: Artificial intelligence; Change detection; Deep learning; Flicker chronoscopy; Image registration.

Figures

Figure 1
Examples of transformations applied to an ocular image.
Figure 2
EyeLiner pipeline for fundus image registration. A, A fixed and moving image are processed by a deep learning–based segmentation algorithm to generate segmentation masks of the anatomical structures. B, Segmentation masks are passed to the SuperPoint and LightGlue deep learning algorithms, which detect corresponding keypoints on the anatomical structures. C, D, A thin-plate spline registration algorithm is utilized to find the optimal alignment between images based on the keypoints. The lock symbol indicates that our models use pretrained frozen weights.
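
To make the thin-plate spline step in panels C and D concrete, here is a minimal sketch using OpenCV's shape module as a stand-in; it is not the pipeline's actual implementation, and the keypoint arrays are assumed to come from the matching step sketched earlier.

    # Hypothetical thin-plate spline warp from matched keypoints using OpenCV's shape
    # module; a stand-in for EyeLiner's TPS step, not the authors' implementation.
    import cv2
    import numpy as np

    def tps_register(moving_img, pts_moving, pts_fixed):
        """Warp moving_img so that pts_moving (N, 2 float32) land on pts_fixed."""
        tps = cv2.createThinPlateSplineShapeTransformer()
        matches = [cv2.DMatch(i, i, 0) for i in range(len(pts_fixed))]
        # OpenCV expects point sets shaped (1, N, 2); warpImage applies a backward
        # mapping, so the fixed (target) points are passed first when estimating.
        tps.estimateTransformation(
            pts_fixed.reshape(1, -1, 2), pts_moving.reshape(1, -1, 2), matches
        )
        return tps.warpImage(moving_img)
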
Figure 3
Image registrations obtained using our EyeLiner pipeline. A, Registrations from the Fundus Image REgistration (FIRE) data set for S, P, and A category images, where S = large overlap between images (>75% overlap), P = small overlap between images (<75% overlap), and A = large overlap between images with anatomical change (observable in the vessel thickness). B, Registrations from the sequential images for glaucoma forecast (SIGF) data set for patients who transitioned from negative to positive glaucoma. C, Registrations from our internal Colorado Ophthalmology Research Information System (CORIS) data set. The first 2 registrations correspond to patients with major and minor progression to glaucoma, respectively. The third registration is of a stable patient.
