Invest Ophthalmol Vis Sci. 2011 Oct 21;52(11):8316-22.
doi: 10.1167/iovs.10-7012.

Computerized macular pathology diagnosis in spectral domain optical coherence tomography scans based on multiscale texture and shape features

Yu-Ying Liu et al.
Abstract

Purpose: To develop an automated method to identify the normal macula and three macular pathologies (macular hole [MH], macular edema [ME], and age-related macular degeneration [AMD]) from the fovea-centered cross sections in three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) images.

Methods: A sample of SD-OCT macular scans (macular cube 200 × 200 or 512 × 128 scan protocol; Cirrus HD-OCT; Carl Zeiss Meditec, Inc., Dublin, CA) was obtained from healthy subjects and subjects with MH, ME, and/or AMD (dataset for development: 326 scans from 136 subjects [193 eyes], and dataset for testing: 131 scans from 37 subjects [58 eyes]). A fovea-centered cross-sectional slice for each of the SD-OCT images was encoded using spatially distributed multiscale texture and shape features. Three ophthalmologists labeled each fovea-centered slice independently, and the majority opinion for each pathology was used as the ground truth. Machine learning algorithms were used to identify the discriminative features automatically. Two-class support vector machine classifiers were trained to identify the presence of normal macula and each of the three pathologies separately. The area under the receiver operating characteristic curve (AUC) was calculated to assess the performance.
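
For illustration only, below is a minimal sketch of the per-pathology classification step described above (feature vectors → PCA → two-class SVM), assuming the multiscale texture and shape features have already been extracted into a feature matrix. The PCA dimensionality, SVM kernel and regularization value, the placeholder data, and the use of scikit-learn are assumptions for this sketch, not the authors' reported settings.

```python
# Sketch: one two-class (present / absent) classifier per macular category,
# trained on precomputed feature vectors. All values below are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(326, 500))            # placeholder feature matrix (scans x features)
labels = {                                  # placeholder majority-opinion labels per category
    "normal": rng.integers(0, 2, 326),
    "MH":     rng.integers(0, 2, 326),
    "ME":     rng.integers(0, 2, 326),
    "AMD":    rng.integers(0, 2, 326),
}

classifiers = {}
for category, y in labels.items():
    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=50),               # assumed dimensionality reduction
        SVC(kernel="rbf", C=1.0),           # assumed kernel and regularization
    )
    clf.fit(X, y)
    classifiers[category] = clf

# Continuous decision values can then feed the ROC/AUC analysis:
scores = {c: clf.decision_function(X[:5]) for c, clf in classifiers.items()}
```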

Results: The cross-validation AUCs on the development dataset were 0.976, 0.931, 0.939, and 0.938, and the AUCs on the holdout testing set were 0.978, 0.969, 0.941, and 0.975, for identifying normal macula, MH, ME, and AMD, respectively.

Conclusions: The proposed automated data-driven method successfully identified various macular pathologies (all AUC > 0.93 in cross-validation and > 0.94 on the holdout testing set). This method may effectively identify the discriminative features without relying on a potentially error-prone segmentation module.


Figures

Figure 1.
Stages of our approach. Morphologic op., morphologic operations; LBP, local binary patterns; PCA, principal component analysis; SVM, support vector machine.
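
As a hedged sketch of the texture-encoding stage named in the caption, the snippet below computes local binary pattern histograms over a spatial grid at several radii (scales). The grid size, LBP radii, "uniform" code variant, and the use of scikit-image are illustrative assumptions; the paper's exact multiscale spatial-pyramid construction may differ.

```python
# Sketch: spatially distributed multiscale LBP histograms for one OCT slice.
# Grid layout, LBP parameters, and library choice are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_pyramid_features(image, grid=(4, 4), radii=(1, 2, 3)):
    """Concatenate LBP histograms over a spatial grid at several radii (scales)."""
    h, w = image.shape
    feats = []
    for r in radii:                                   # multiple texture scales
        n_points = 8 * r
        lbp = local_binary_pattern(image, n_points, r, method="uniform")
        n_bins = n_points + 2                         # codes produced by "uniform" LBP
        for i in range(grid[0]):                      # spatially distributed blocks
            for j in range(grid[1]):
                block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                            j * w // grid[1]:(j + 1) * w // grid[1]]
                hist, _ = np.histogram(block, bins=n_bins,
                                       range=(0, n_bins), density=True)
                feats.append(hist)
    return np.concatenate(feats)

slice_img = (np.random.default_rng(0).random((256, 512)) * 255).astype(np.uint8)
print(lbp_pyramid_features(slice_img).shape)          # one feature vector per slice
```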
Figure 2.
Venn diagram of labeling agreement among the three experts across all macular categories in dataset A. Both the scan counts and the percentages are shown. E1, E2, E3: the three experts.
Figure 3.
The number of cases and representative examples in dataset A where all three experts, two experts, or only one expert gave “positive” labels for the presence of normal macula and each pathology. Note that images in the first two columns were defined as positive, whereas those in the last column were regarded as negative in our majority-opinion-based ground truth. The images without total agreement usually contained subtle, early pathologies that occupied small areas.
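
A small sketch of the majority-opinion rule this caption refers to: a scan is positive for a category when at least two of the three experts marked it positive. The label layout below is an assumed placeholder, not the study's data.

```python
# Sketch: majority-opinion ground truth from three independent expert labels.
import numpy as np

expert_labels = np.array([   # rows: experts E1-E3; columns: scans (1 = positive)
    [1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 1],
])
majority = expert_labels.sum(axis=0) >= 2   # positive if at least two experts agree
print(majority.astype(int))                 # -> [1 1 1 0 1]
```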
Figure 4.
Examples of the aligned retinal images and their Canny edge maps derived under different edge-detection thresholds t for each macular category. The smaller the value of t, the more edges are retained.
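
A sketch of how edge maps like those in Figure 4 could be produced. Note that scikit-image's Canny detector uses hysteresis thresholds rather than the single threshold t reported in the paper, so mapping t to the quantile-based high_threshold below (with the low threshold at half of t) is an assumption for illustration.

```python
# Sketch: Canny edge maps at several threshold settings (rough stand-in for the
# paper's single parameter t; the exact thresholding scheme may differ).
import numpy as np
from skimage.feature import canny

aligned = np.random.default_rng(0).random((256, 512))  # placeholder aligned slice in [0, 1]
for t in (0.2, 0.4, 0.6):
    edges = canny(aligned, sigma=2.0,
                  low_threshold=0.5 * t, high_threshold=t,
                  use_quantiles=True)
    print(t, int(edges.sum()))                          # lower t keeps more edge pixels
```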
Figure 5.
ROC curve of one run of 10-fold cross-validation on all images in dataset A. The best feature setting for each macular pathology was used. Feature setting: TS (t = 0.4), S (t = 0.4), TS (t = 0.4), and TS (t = 0.2) for NM, MH, ME, and AMD, respectively.
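
A hedged sketch of the cross-validated ROC evaluation behind Figure 5, for one pathology. The study splits folds at the subject level, which GroupKFold approximates here; the features, labels, subject IDs, and model settings are placeholders, not the authors' configuration.

```python
# Sketch: subject-grouped 10-fold cross-validation with a pooled ROC/AUC.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
X = rng.normal(size=(326, 200))               # placeholder features
y = rng.integers(0, 2, 326)                   # placeholder labels for one pathology
subjects = rng.integers(0, 136, 326)          # keep all scans of a subject in one fold

y_true, y_score = [], []
for train_idx, test_idx in GroupKFold(n_splits=10).split(X, y, groups=subjects):
    clf = make_pipeline(StandardScaler(), PCA(n_components=30),
                        SVC(kernel="rbf"))
    clf.fit(X[train_idx], y[train_idx])
    y_true.extend(y[test_idx])
    y_score.extend(clf.decision_function(X[test_idx]))

fpr, tpr, _ = roc_curve(y_true, y_score)
print("cross-validated AUC:", auc(fpr, tpr))
```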
Figure 6.
AUC results as a function of training set size sampled from dataset A. For each training fold, 10%, 20%, …, 100% of the positive and negative subjects were sampled and used for training, whereas the testing fold was unchanged. Feature setting: TS (t = 0.4), S (t = 0.4), TS (t = 0.4), and TS (t = 0.2) for NM, MH, ME, and AMD, respectively.
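
A brief sketch of the subsampling experiment in Figure 6, simplified to a single fixed train/test split and to uniform random subsampling; the paper repeats this inside each cross-validation fold and samples positive and negative subjects separately. Data and model settings are placeholders.

```python
# Sketch: AUC as a function of the training-set fraction (single split for brevity).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 50)), rng.integers(0, 2, 200)
X_test,  y_test  = rng.normal(size=(60, 50)),  rng.integers(0, 2, 60)

for frac in np.arange(0.1, 1.01, 0.1):
    n = int(frac * len(y_train))
    idx = rng.choice(len(y_train), size=n, replace=False)   # subsampled training set
    clf = SVC(kernel="rbf").fit(X_train[idx], y_train[idx])
    print(f"{frac:.1f}", roc_auc_score(y_test, clf.decision_function(X_test)))
```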
Figure 7.
ROC curve of testing on dataset B, based on the pathology classifiers trained using images from dataset A. The ground truth for this experiment was defined by the consensus of the two experts (experts 1 and 2) on both datasets. The statistics of pathology distribution are shown in Table 6. The feature and parameter setting for each pathology was determined using dataset A only. Feature setting: TS (t = 0.4), S (t = 0.4), TS (t = 0.4), and TS (t = 0.2) for NM, MH, ME, and AMD, respectively.
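
A minimal sketch of the holdout evaluation in Figure 7: a classifier fit on dataset A features and scored on dataset B. The arrays stand in for the two datasets' features and consensus labels, and the model settings are assumptions.

```python
# Sketch: train on dataset A, evaluate ROC/AUC on the independent dataset B.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
X_A, y_A = rng.normal(size=(326, 200)), rng.integers(0, 2, 326)   # placeholder dataset A
X_B, y_B = rng.normal(size=(131, 200)), rng.integers(0, 2, 131)   # placeholder dataset B

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_A, y_A)
fpr, tpr, _ = roc_curve(y_B, clf.decision_function(X_B))
print("holdout AUC:", auc(fpr, tpr))
```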

