An adaptive grid for graph-based segmentation in retinal OCT

Andrew Lang et al.

Proc SPIE Int Soc Opt Eng. 2014;9034:903402. doi: 10.1117/12.2043040.

Abstract

Graph-based methods for retinal layer segmentation have proven popular due to their efficiency and accuracy. These methods build a graph with a node at each voxel location and use edges connecting nodes to encode hard constraints on each layer's thickness and smoothness. In this work, we explore deforming the regular voxel grid so that adjacent vertices in the graph more closely follow the natural curvature of the retina. The deformed grid is constructed by fixing node locations based on a regression model of each layer's thickness relative to the overall retina thickness, yielding a subject-specific grid. Because graph vertices are no longer tied to voxel locations, the resolution that the graph represents can be controlled. By incorporating soft constraints between adjacent nodes, segmentation on this grid favors smoothly varying surfaces consistent with the shape of the retina. Our final segmentation method follows our previous work: boundary probabilities are estimated using a random forest classifier, followed by an optimal graph search algorithm on the new adaptive grid to produce the final segmentation. Our method is shown to produce a more consistent segmentation, with an overall accuracy of 3.38 μm across all boundaries.
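As a rough illustration of the final step above, the sketch below traces a single boundary through a column-wise probability map using a dynamic program with a soft smoothness penalty. This is a deliberate simplification of the optimal graph search used in the paper; the function name, the parameters `smooth_w` and `max_shift`, and the negative-log cost model are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def segment_boundary(prob, smooth_w=1.0, max_shift=2):
    """Trace one boundary surface through a (rows x cols) probability map.

    Dynamic program over columns: one row is picked per column, adjacent
    columns pay a soft penalty smooth_w * |row change| and are hard-limited
    to max_shift rows, standing in for the paper's optimal graph search."""
    n_rows, n_cols = prob.shape
    cost = -np.log(prob + 1e-9)            # low cost where the boundary is likely
    acc = cost[:, 0].copy()                # best cost of reaching each row so far
    back = np.zeros((n_rows, n_cols), dtype=int)

    for c in range(1, n_cols):
        new_acc = np.full(n_rows, np.inf)
        for r in range(n_rows):
            lo, hi = max(0, r - max_shift), min(n_rows, r + max_shift + 1)
            prev = np.arange(lo, hi)
            trans = acc[prev] + smooth_w * np.abs(prev - r)   # soft smoothness term
            j = int(np.argmin(trans))
            new_acc[r] = trans[j] + cost[r, c]
            back[r, c] = prev[j]
        acc = new_acc

    rows = np.zeros(n_cols, dtype=int)     # trace back the minimum-cost path
    rows[-1] = int(np.argmin(acc))
    for c in range(n_cols - 1, 0, -1):
        rows[c - 1] = back[rows[c], c]
    return rows
```

In practice, the per-boundary probability map could be assembled from a random forest's per-pixel class probabilities (e.g., scikit-learn's RandomForestClassifier.predict_proba evaluated on per-voxel features), as described in the abstract.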

Keywords: OCT; adaptive grid; classification; layer segmentation; retina.


Figures

Figure 1
Flowchart of our layer segmentation algorithm. Note that feature computation and boundary classification are done on ‘flattened’ images, which is undone to build the adaptive grid.
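For context on the 'flattening' step mentioned in Figure 1, the fragment below shows one simple way to flatten a B-scan by shifting each A-scan so an estimated boundary lies on a common row, keeping the shifts so the operation can be undone later. The function names and the use of np.roll are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def flatten_to_boundary(img, boundary_rows, target_row):
    """Shift each column (A-scan) of img so that boundary_rows[c] lands on
    target_row. The per-column shifts are returned so the flattening can be
    undone when the adaptive grid is built in the original image space."""
    shifts = target_row - np.asarray(boundary_rows, dtype=int)
    flat = np.empty_like(img)
    for c, s in enumerate(shifts):
        # np.roll wraps values around the top/bottom edge; adequate for a
        # sketch, a real implementation would pad with background instead.
        flat[:, c] = np.roll(img[:, c], s)
    return flat, shifts

def unflatten_rows(rows_flat, shifts):
    """Map row indices found on the flattened image back to the original image."""
    return rows_flat - shifts
```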
Figure 2
Images showing the construction of our adaptive grid. (a) Streamlines are generated between initial estimates of the outer retinal boundaries. (b) Estimates of each boundary as determined by using a regression model.
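The abstract and Figure 2 do not specify how the streamlines are computed. One common way to connect corresponding points between two boundaries, used here purely as an assumed stand-in and not necessarily the authors' method, is to solve Laplace's equation between the boundary estimates and trace streamlines along the gradient of the resulting potential. A minimal relaxation sketch:

```python
import numpy as np

def laplace_potential(top, bottom, height, width, n_iter=2000):
    """Potential field that is 0 on/above the upper boundary and 1 on/below
    the lower boundary; streamlines between the boundaries can be traced
    along its gradient. top and bottom give, per column, the row index of
    the initial estimates of the outer retinal boundaries."""
    u = np.zeros((height, width))
    inside = np.zeros((height, width), dtype=bool)
    for c in range(width):
        u[bottom[c]:, c] = 1.0                     # at/below the lower boundary
        if 0 < c < width - 1:                      # outermost columns kept fixed
            inside[top[c] + 1:bottom[c], c] = True # strictly between boundaries

    for _ in range(n_iter):                        # Jacobi relaxation of Laplace's eq.
        avg = np.zeros_like(u)
        avg[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        u[inside] = avg[inside]
    return u
```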
Figure 3
Examples of fitting the regression model at three separate locations in the retina (from three separate streamlines). Results are shown for each layer at the fovea center (top row), 0.5 mm from the fovea in the nasal direction (middle row), and 1.5 mm from the fovea in the temporal superior direction (bottom row). Each black dot represents a measurement from a separate subject (by manual segmentation), with total retina thickness (i.e., the total length of the streamline, in μm) on the x-axis and the layer's thickness relative to the total distance from the ILM to the BM on the y-axis.
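As an illustration of the per-location regression in Figure 3, the snippet below fits a simple linear model of each layer's relative thickness against total retina thickness across subjects. The layer names, numeric values, and the linear form are illustrative assumptions; the abstract only states that a regression model relates each layer's thickness to the overall retina thickness.

```python
import numpy as np

# Hypothetical training data at one streamline location: per subject, the
# total retina thickness (ILM to BM, in um) and each layer's thickness
# expressed relative to that total, as plotted in Figure 3.
total_thickness = np.array([260.0, 275.0, 290.0, 305.0, 320.0])    # x-axis
rel_layer_thickness = {                                             # y-axis
    "RNFL": np.array([0.10, 0.11, 0.11, 0.12, 0.13]),
    "GCIP": np.array([0.24, 0.24, 0.25, 0.25, 0.26]),
}

# One linear model per layer: relative thickness ~ a * total_thickness + b.
models = {name: np.polyfit(total_thickness, y, deg=1)
          for name, y in rel_layer_thickness.items()}

def predict_relative_thickness(layer, total):
    """Predict a layer's relative thickness for a new subject's total thickness."""
    a, b = models[layer]
    return a * total + b

print(predict_relative_thickness("RNFL", 300.0))
```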
Figure 4
(a) The final grid overlaid on the retina, constructed after filling in the graph along the streamlines between the regression estimates (shown in blue). The vertices of the graph are located at the intersection of the lines. We denote ‘base nodes’ as those at the intersection of the regression (blue) lines. (b) When looking at the deformed grid as a 4-connected rectangular lattice, we can think of the deformation as ‘flattening’ the data to each boundary (significantly downsampled for visualization).
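Below is a minimal sketch of filling in grid nodes along each streamline between the regression-estimated boundaries, as in Figure 4(a). The even subdivision between consecutive boundary estimates and the `nodes_per_layer` parameter are assumptions used only to illustrate how the grid resolution can be controlled independently of the voxel spacing.

```python
import numpy as np

def build_adaptive_grid(boundary_depths, nodes_per_layer=4):
    """Place grid nodes along each streamline (column) between successive
    boundary estimates.

    boundary_depths: (n_boundaries x n_columns) array of depths along each
    streamline (e.g., in um) for the estimated boundaries, ordered top to
    bottom. Returns an (n_rows x n_columns) array of node depths; the rows
    coinciding with boundary_depths are the 'base nodes'."""
    n_b, n_cols = boundary_depths.shape
    rows = []
    for b in range(n_b - 1):
        top, bot = boundary_depths[b], boundary_depths[b + 1]
        # Evenly spaced nodes between each pair of boundaries; a larger
        # nodes_per_layer gives a finer grid within each layer.
        for k in range(nodes_per_layer):
            t = k / nodes_per_layer
            rows.append((1 - t) * top + t * bot)
    rows.append(boundary_depths[-1])      # include the bottom boundary itself
    return np.vstack(rows)

# Toy example: 3 boundary estimates over 5 streamlines (columns).
boundaries = np.array([[10, 11, 12, 11, 10],
                       [40, 42, 44, 42, 40],
                       [90, 92, 95, 92, 90]], dtype=float)
grid = build_adaptive_grid(boundaries, nodes_per_layer=4)
print(grid.shape)   # (9, 5): 2 layers x 4 nodes + the final boundary row
```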
Figure 5
Average absolute error (left) and standard deviation (right) as the smoothness parameter (w) and the grid size parameter (s) are varied.
Figure 6
Two B-scan images with overlaid segmentations from (a) the manual ground truth, (b) the voxel grid algorithm, and (c) the deformed grid algorithm. Results from a zoomed-in region of the fovea are shown in (d). Images are from separate subjects and have been scaled 3× in the vertical direction.

