Cells. 2022 Feb 17;11(4):716. doi: 10.3390/cells11040716.

Automatic Colorectal Cancer Screening Using Deep Learning in Spatial Light Interference Microscopy Data


Jingfang K Zhang et al. Cells.

Abstract

The surgical pathology workflow currently adopted by clinics uses staining to reveal tissue architecture within thin sections. A trained pathologist then conducts a visual examination of these slices and, since the investigation is based on an empirical assessment, a certain amount of subjectivity is unavoidable. Furthermore, the reliance on external contrast agents such as hematoxylin and eosin (H&E), albeit well established, makes it difficult to standardize color balance, staining strength, and imaging conditions, hindering automated computational analysis. In response to these challenges, we applied spatial light interference microscopy (SLIM), a label-free method that generates contrast based on intrinsic tissue refractive index signatures. Thus, we reduce human bias and make imaging data comparable across instruments and clinics. We applied a mask R-CNN deep learning algorithm to the SLIM data to achieve an automated colorectal cancer screening procedure, i.e., classifying normal vs. cancerous specimens. Evaluated on a tissue microarray consisting of specimens from 132 patients, our method achieved 91% accuracy for gland detection, 99.71% accuracy in gland-level classification, and 97% accuracy in core-level classification. A SLIM tissue scanner accompanied by an application-specific deep learning algorithm may become a valuable clinical tool, enabling faster and more accurate assessments by pathologists.

Keywords: automated colorectal cancer screening; deep learning; label-free; mask R-CNN; spatial light interference microscopy.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
The SLIM tissue scanner setup. The SLIM system was implemented as an add-on to an existing phase contrast microscope. Pol, polarizer; SLM, spatial light modulator. The four independent frames corresponding to the four phase shifts imparted by the SLM are shown for a tissue sample.
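
The four phase-shifted frames are the raw input to quantitative phase retrieval. As a rough, generic illustration (not the authors' published SLIM reconstruction, which includes additional amplitude and calibration corrections), a minimal four-step phase-shifting estimate in Python could look like this:

    import numpy as np

    def phase_from_four_frames(i0, i1, i2, i3):
        # Wrapped phase from four intensity frames recorded at the assumed
        # SLM phase shifts of 0, pi/2, pi, and 3*pi/2, respectively.
        return np.arctan2(i3 - i1, i0 - i2)

Applied pixel-wise to the four camera frames, this yields the phase map that provides the label-free contrast used throughout the paper.
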
Figure 2
The mask R-CNN network architecture. The mask R-CNN (mask region-based convolutional neural network) framework contains two stages: scanning images and generating regional proposals for possible objects; and classifying the proposals and generating bounding boxes and pixel-wise masks. This specific network adopts a ResNet101 backbone plus FPN (feature pyramid network) for feature extraction. The RPN (region proposal network) scans over the backbone feature maps, which allows extracted features to be reused and removes duplicate calculations. The RPN outputs two results for each anchor: an anchor class (foreground or background, where foreground implies the potential existence of an object) and a bounding box refinement (the location and size of the foreground anchor box are refined to fit the object). The final proposals are passed to the next stage, which generates two outputs for each ROI proposed by the RPN: a class (for objects) and a bounding box refinement. An ROI pooling algorithm crops a piece of a feature map and resizes it to a fixed size, which the downstream classifiers require. From this stage, a parallel branch of two fully convolutional layers is added that generates masks for the positive regions selected by the ROI classifier. The other branch, of fully connected layers, takes the outputs of the ROI pooling and produces two values for each object: a class label and a bounding box prediction.
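
An architecture of this kind is available off the shelf; for instance, torchvision ships a Mask R-CNN with a ResNet-50 + FPN backbone (the paper's ResNet-101 backbone would require a custom backbone object, so the sketch below approximates the described setup rather than reproducing the authors' code):

    import torch
    import torchvision

    # Three classes assumed here: background, normal gland, cancer gland.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(
        weights=None, num_classes=3
    )
    model.eval()

    # SLIM phase maps are single-channel; torchvision detection models expect
    # 3-channel float images in [0, 1], so the channel is replicated.
    phase_map = torch.rand(1, 512, 512)
    image = phase_map.repeat(3, 1, 1)

    with torch.no_grad():
        predictions = model([image])

    # Each prediction dict holds 'boxes', 'labels', 'scores', and soft
    # per-instance 'masks' that can be thresholded into pixel-wise masks.
    print(predictions[0]["boxes"].shape, predictions[0]["masks"].shape)
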
Figure 3
Examples of segmentation and classification. (a) Images of the 32 testing cores. (b) A cancer core. (c) The predicted gland detection and classification for the core in (b). (d) A zoomed-in image of the cancer gland boxed in (c). (e) A normal core. (f) The predicted gland detection and classification for the core in (e). (g) A zoomed-in image of the normal gland boxed in (f). Red indicates cancer glands and green normal glands.
Figure 4
Examples of gland detection errors and additional positive detections. (a) Ground truth (manual) segmentation of cancer glands. (b) Network predictions, with the regions in the dashed boxes missed. (c,d) Similar illustrations to those in (a,b). (e) Ground truth (manual) segmentation of normal glands. (f) Network predictions, with the regions in the dashed boxes being additional true positives. (g,h) Similar illustrations to those in (e,f). Note that all errors and additions occurred at the boundaries of the cores.
Figure 5
The performance of classification, detection, and diagnosis on a test dataset. (a) A confusion matrix showing that 95 instances of the detected cancer glands were correctly classified, while 1 was wrongly classified as normal; all 248 instances of the detected normal glands were correctly classified. (b) A confusion matrix showing that 96 cancer glands were detected, with 95 correctly classified and 1 wrongly classified; 20 cancer glands were missed. A total of 248 normal glands were detected and correctly classified, while 3 normal glands were missed. (c) A confusion matrix showing that all 14 cancer cores were correctly diagnosed as cancer; 17 of the 18 normal cores were correctly diagnosed, while 1 normal core was wrongly diagnosed as cancer. (d) Gland detection performance at three different detection confidence scores: 90%, 80%, and 70%, as indicated. The network captures 96%, 98%, and 99% of the annotated glands when the confidence score threshold is set at 90%, 80%, and 70%, respectively. Blue dots: the 90% confidence score imposes the highest filtering threshold and thus captures the fewest glands. Orange dots: the 80% confidence score imposes a lower threshold and captures slightly more glands. Green dots: the 70% confidence score imposes the lowest threshold and captures the most glands of the three.
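
The accuracies quoted in the abstract follow directly from these confusion-matrix counts; a quick check:

    # Gland-level classification, from panel (a): 95 cancer glands correct,
    # 1 misclassified, and 248 normal glands correct.
    correct_glands = 95 + 248
    total_glands = 95 + 1 + 248
    print(f"gland-level accuracy: {correct_glands / total_glands:.2%}")  # 99.71%

    # Core-level diagnosis, from panel (c): 14/14 cancer cores and
    # 17/18 normal cores diagnosed correctly.
    print(f"core-level accuracy: {(14 + 17) / 32:.2%}")  # 96.88%, i.e., ~97%
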
Figure 6
Gland classification performance at three training epochs. The network's gland classification performance is shown at three different training epochs, the 50th, 100th, and 390th, as indicated. The AUC (area under the ROC curve) is 0.87, 0.90, and 0.91, respectively.
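
Given per-gland ground-truth labels and network confidence scores, an AUC of this kind is conventionally computed from the ROC curve; a generic sketch with hypothetical scores (not the paper's data):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical per-gland labels (1 = cancer) and confidence scores.
    y_true = np.array([1, 0, 1, 0, 0, 1, 0, 1])
    y_score = np.array([0.92, 0.10, 0.67, 0.35, 0.05, 0.88, 0.40, 0.73])

    print(f"AUC: {roc_auc_score(y_true, y_score):.2f}")
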
