2019 Jul 8;10(8):3815-3832. doi: 10.1364/BOE.10.003815. eCollection 2019 Aug 1.

RAC-CNN: multimodal deep learning based automatic detection and classification of rod and cone photoreceptors in adaptive optics scanning light ophthalmoscope images


David Cunefare et al. Biomed Opt Express.

Abstract

Quantification of the human rod and cone photoreceptor mosaic in adaptive optics scanning light ophthalmoscope (AOSLO) images is useful for the study of various retinal pathologies. Subjective and time-consuming manual grading has remained the gold standard for evaluating these images, and no well-validated automatic methods for detecting individual rods have been developed. We present a novel deep learning-based automatic method, called the rod and cone CNN (RAC-CNN), for detecting and classifying rods and cones in multimodal AOSLO images. We test our method on images from healthy subjects as well as subjects with achromatopsia over a range of retinal eccentricities. We show that our method is on par with human grading for detecting rods and cones.


Conflict of interest statement

The authors declare that there are no conflicts of interest related to this article.

Figures

Fig. 1.
Rod and cone photoreceptor visualization on AOSLO. (a) Confocal AOSLO image at 7° from the fovea in a normal subject. (b) Co-registered non-confocal split detector AOSLO image from the same location as (a). (c) Confocal AOSLO image at 3° from the fovea in a subject with ACHM. (d) Simultaneously captured split detector AOSLO image from the same location as (c). Cone photoreceptor examples are shown with magenta arrows, and rod photoreceptor examples are shown with yellow arrows. Scale bars: 10 μm.
Fig. 2.
Outline of the CNN AOSLO rod and cone detection algorithm.
Fig. 3.
Creating label and weight maps from AOSLO image pairs. (a) Confocal AOSLO image. (b) Co-registered non-confocal split detector AOSLO image from the same location. (c-d) Manually marked rod positions shown in yellow and cone positions shown in magenta on the confocal image shown in (a) and on the split detector image shown in (b). (e) Label map generated from the markings in (c-d). (f) Weight map corresponding to the label map in (e).
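The label- and weight-map construction in Fig. 3 can be sketched as a simple rasterization of the manual markings. This is an illustrative version only: the disk radius, class encoding (0 = background, 1 = rod, 2 = cone), and the class-balancing weights are assumed values, not the paper's exact scheme.

```python
import numpy as np

def make_label_map(shape, rod_xy, cone_xy, radius=2):
    """Rasterize manual (x, y) markings into a per-pixel label map:
    0 = background, 1 = rod, 2 = cone. Each marking is drawn as a
    small disk; the radius is an illustrative choice."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    label = np.zeros(shape, dtype=np.uint8)
    for cls, pts in ((1, rod_xy), (2, cone_xy)):
        for (x, y) in pts:
            label[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = cls
    return label

def make_weight_map(label, bg_weight=1.0, cell_weight=5.0):
    """Up-weight the sparse photoreceptor pixels so the training loss
    is not dominated by background (weight values are assumptions)."""
    return np.where(label > 0, cell_weight, bg_weight)
```

In a training pipeline, the weight map would multiply the per-pixel loss so that rod and cone pixels contribute as much as the far more numerous background pixels.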
Fig. 4.
The rod and cone CNN (RAC-CNN) architecture, which consists of the following layers: convolutional (Conv(F,G,N) where F and G are the kernel sizes in the first two dimensions and N is the number of kernels), batch normalization (BatchNorm), ReLU, max pooling (MaxPool(P,Q) where P and Q are the window dimensions), unpooling, concatenation, and soft-max. The same structure is used in the split detector AOSLO and confocal AOSLO paths.
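The two-path design in Fig. 4 can be illustrated with a toy NumPy sketch: the same Conv+ReLU structure runs on each modality, the feature stacks are concatenated along the channel axis, and a per-pixel soft-max produces class probabilities. Kernel counts and sizes are placeholders, and the pooling/unpooling and batch-normalization stages are omitted; this is not the published architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(img, kernels):
    """'Same' 2-D cross-correlation of an (H, W) image with (N, F, G)
    kernels, returning an (N, H, W) feature stack (loop version for
    clarity, not speed)."""
    n, f, g = kernels.shape
    padded = np.pad(img, ((f // 2, f // 2), (g // 2, g // 2)))
    h, w = img.shape
    out = np.empty((n, h, w))
    for k in range(n):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(padded[i:i + f, j:j + g] * kernels[k])
    return out

def fuse_and_classify(confocal, split, kernels_c, kernels_s, w_out):
    """Toy analogue of the RAC-CNN fusion: identical Conv+ReLU paths for
    the confocal and split detector images, channel-wise concatenation,
    a 1x1 linear classifier, and a per-pixel soft-max over three classes
    (background / rod / cone)."""
    feats = np.concatenate([relu(conv2d(confocal, kernels_c)),
                            relu(conv2d(split, kernels_s))], axis=0)
    logits = np.tensordot(w_out, feats, axes=([1], [0]))  # (3, H, W)
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)
```

The key design point the sketch preserves is that each modality gets its own feature path before fusion, so the network can exploit the complementary appearance of photoreceptors in confocal versus split detector images.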
Fig. 5.
Detection of rods and cones in confocal and split detector AOSLO image pairs. (a) Confocal AOSLO image. (b) Co-registered non-confocal split detector AOSLO image from the same location. (c) Rod probability map and (d) cone probability map generated from (a) and (b) using the trained RAC-CNN. (e) Extended maxima of (c). (f) Extended maxima of (d). (g-h) Detected rods marked in yellow and cones marked in magenta on the confocal image shown in (a) and on the split detector image shown in (b).
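The detection step in Fig. 5 (probability map, then extended maxima, then positions) can be approximated as below. The paper applies the morphological extended-maxima transform; this sketch substitutes a simpler local-maximum-plus-height-threshold rule, with `h` and the neighborhood size as assumed parameters rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import maximum_filter, label, center_of_mass

def detect_cells(prob_map, h=0.3, size=3):
    """Approximate the extended-maxima step: keep pixels that are local
    maxima of the CNN probability map and exceed a height threshold h,
    then report the centroid of each connected component as a detected
    photoreceptor position (row, col)."""
    local_max = prob_map == maximum_filter(prob_map, size=size)
    mask = local_max & (prob_map >= h)
    labels, n = label(mask)
    return [center_of_mass(mask, labels, i) for i in range(1, n + 1)]
```

Run once per class: the rod probability map yields rod positions and the cone probability map yields cone positions, which are then overlaid on the image pair as in panels (g-h).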
Fig. 6.
Performance of the RAC-CNN method on healthy images. Confocal AOSLO images from different subjects are shown on the top row, and the co-registered split detector AOSLO images are shown in the row second from the top. Rod detection results for the RAC-CNN method with respect to the first set of manual markings are shown on the second row from the bottom, and cone detection results are shown on the bottom row. Green points denote true positives, blue denotes false negatives, and gold denotes false positives. Dice’s coefficients for the rods and cones are 0.98 and 1 in (a), 0.94 and 0.99 in (b), and 0.91 and 0.95 in (c), respectively.
Fig. 7.
Performance of the RAC-CNN method on ACHM images. Confocal AOSLO images from different subjects are shown on the top row, and the simultaneously captured split detector AOSLO images are shown in the row second from the top. Rod detection results for the RAC-CNN method with respect to the first set of manual markings are shown on the second row from the bottom, and cone detection results are shown on the bottom row. Green points denote true positives, blue denotes false negatives, and gold denotes false positives. Dice’s coefficients for the rods and cones are 0.93 and 0.98 in (a), 0.94 and 0.93 in (b), and 0.89 and 0.88 in (c), respectively.
Fig. 8.
Performance of the automated algorithms for cone detection in a healthy (top) and ACHM (bottom) image pair. Simultaneously captured confocal and split detector images are shown in the two left columns. Performance with respect to manual cone markings for the RAC-CNN and our previous LF-DM-CNN [31] methods are shown in the right two columns and displayed on the split detector images. Only cones are included in this figure as LF-DM-CNN cannot detect rods. Green points denote true positives, blue denotes false negatives, and gold denotes false positives. Dice’s coefficients are 0.99 for both methods for the healthy image pair, and 0.92 for both methods for the ACHM image pair.
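The Dice's coefficients reported in Figs. 6-8 follow the standard point-validation recipe: match automatic detections to manual markings one-to-one, count true positives, false positives, and false negatives, and compute 2·TP / (2·TP + FP + FN). A minimal sketch follows; the greedy nearest-neighbor matching and the distance cutoff are illustrative assumptions, not the paper's exact matching rule.

```python
import math

def match_points(auto_pts, manual_pts, max_dist=3.0):
    """Greedy one-to-one matching of automatic detections to manual
    markings within max_dist pixels (a hypothetical cutoff).
    Returns (TP, FP, FN)."""
    unmatched = list(manual_pts)
    tp = 0
    for a in auto_pts:
        best = min(unmatched, key=lambda m: math.dist(a, m), default=None)
        if best is not None and math.dist(a, best) <= max_dist:
            unmatched.remove(best)
            tp += 1
    return tp, len(auto_pts) - tp, len(unmatched)

def dice_coefficient(tp, fp, fn):
    """Dice's coefficient for point-detection validation:
    2*TP / (2*TP + FP + FN)."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0
```

For example, 9 true positives with 1 false positive and 1 false negative give a Dice's coefficient of 0.9, comparable to the per-image values quoted in the captions above.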

References

    1. Roorda A., Duncan J. L., “Adaptive optics ophthalmoscopy,” Annu. Rev. Vis. Sci. 1(1), 19–50 (2015). 10.1146/annurev-vision-082114-035357 - DOI - PMC - PubMed
    2. Burns S. A., Elsner A. E., Sapoznik K. A., Warner R. L., Gast T. J., “Adaptive optics imaging of the human retina,” Prog. Retinal Eye Res. 68, 1–30 (2019). 10.1016/j.preteyeres.2018.08.002 - DOI - PMC - PubMed
    3. Roorda A., Williams D. R., “The arrangement of the three cone classes in the living human eye,” Nature 397(6719), 520–522 (1999). 10.1038/17383 - DOI - PubMed
    4. Kocaoglu O. P., Lee S., Jonnal R. S., Wang Q., Herde A. E., Derby J. C., Gao W., Miller D. T., “Imaging cone photoreceptors in three dimensions and in time using ultrahigh resolution optical coherence tomography with adaptive optics,” Biomed. Opt. Express 2(4), 748–763 (2011). 10.1364/BOE.2.000748 - DOI - PMC - PubMed
    5. Lombardo M., Serrao S., Lombardo G., “Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images,” PLoS One 9(9), e107402 (2014). 10.1371/journal.pone.0107402 - DOI - PMC - PubMed
