[Preprint]. 2024 Sep 9:arXiv:2409.05666v1.

Robust Real-time Segmentation of Bio-Morphological Features in Human Cherenkov Imaging during Radiotherapy via Deep Learning

Shiru Wang et al. ArXiv.

Abstract

Cherenkov imaging enables real-time visualization of megavoltage X-ray or electron beam delivery to the patient during Radiation Therapy (RT). Bio-morphological features seen in these images, such as vasculature, are patient-specific signatures that can be used for the verification of positioning and the motion management essential to precise RT treatment. Until now, however, no concerted analysis of this biological feature-based tracking had been undertaken, because conventional image processing for feature segmentation was too slow and insufficiently accurate. This study demonstrated the first deep learning framework for such an application, achieving video frame rate processing. To address the challenge of limited annotation of these features in Cherenkov images, a transfer learning strategy was applied. A fundus photography dataset of 20,529 retina image patches with ground-truth vessel annotations was used to pre-train a ResNet segmentation framework. A small Cherenkov dataset (1,483 images from 212 treatment fractions of 19 breast cancer patients) with annotated vasculature masks was then used to fine-tune the model for accurate segmentation prediction. The framework achieved consistent and rapid segmentation of Cherenkov-imaged bio-morphological features, including subcutaneous veins, scars, and pigmented skin, on another 19 patients, with an average Dice score of 0.85 and a processing time of less than 0.7 milliseconds per instance. The model demonstrated outstanding consistency against input image variations and far greater speed than conventional manual segmentation methods, laying the foundation for online segmentation and real-time monitoring in a prospective setting.
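
As a rough illustration of the two-stage transfer learning described above, the sketch below pre-trains a 2D residual segmentation network on retina patches and then fine-tunes it on Cherenkov images. It assumes MONAI's SegResNet as a stand-in for the paper's SegResNet2D, and the dataloaders and hyperparameters are illustrative placeholders rather than the authors' actual settings.

    # Minimal sketch of the two-stage transfer learning strategy (assumptions noted above).
    import torch
    from monai.losses import DiceLoss
    from monai.networks.nets import SegResNet

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = SegResNet(spatial_dims=2, in_channels=1, out_channels=1, init_filters=16).to(device)
    loss_fn = DiceLoss(sigmoid=True)

    # Placeholder data standing in for the FIVES patches and annotated Cherenkov images.
    fives_loader = [(torch.rand(8, 1, 224, 224), torch.randint(0, 2, (8, 1, 224, 224)).float())]
    cherenkov_loader = [(torch.rand(8, 1, 224, 224), torch.randint(0, 2, (8, 1, 224, 224)).float())]

    def train(model, loader, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for img, mask in loader:
                opt.zero_grad()
                loss = loss_fn(model(img.to(device)), mask.to(device))
                loss.backward()
                opt.step()

    # Stage 1: supervised pre-training on retina patches with ground-truth vessel masks.
    train(model, fives_loader, epochs=50, lr=1e-3)
    torch.save(model.state_dict(), "fives_pretrained.pt")

    # Stage 2: fine-tune the pre-trained weights on the small Cherenkov dataset, at a
    # lower learning rate so the learned vessel features are largely preserved.
    model.load_state_dict(torch.load("fives_pretrained.pt"))
    train(model, cherenkov_loader, epochs=20, lr=1e-4)

Whether any layers were frozen during fine-tuning is not stated in the abstract; the sketch simply updates all weights.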

Keywords: Cherenkov imaging; Image segmentation; Morphological feature; Radiotherapy; Transfer learning.

Figures

Fig. 1.
Cherenkov imaging during radiotherapy. (a) Cherenkov image from breast RT. The top panel shows an example Cherenkov image of a breast cancer patient undergoing radiotherapy, rendered with a pseudo-colormap for enhanced visualization. The bottom left panel shows an overlay of Cherenkov images from treatment fractions on two different days, exhibiting the setup variation between them: magenta represents the variation in the later fraction and green the residual in the former fraction. The bottom right panel shows the deformation between the two fractions (fx), quantified by registration of the bio-morphological features in the yellow box region of the Cherenkov images. (b) Setup of the Cherenkov imaging system during RT, providing real-time Cherenkov imaging. (c) Proposed online workflow based on segmentation of bio-morphological features in Cherenkov images for real-time monitoring and verification. The focus of this work is the segmentation step shown in the middle of the workflow, which ultimately enables real-time monitoring and validation of patient positioning.
Fig. 2.
Model training methodology. (a) Example retina photograph from the FIVES dataset with its corresponding vasculature mask. During training, each retina image was cropped into 36 square patches of 224 by 224 pixels for data augmentation. (b) Transfer learning paradigm. The deep learning model was first trained in a supervised manner on the FIVES retinal dataset; the pre-trained weights were then fine-tuned and used for segmentation of bio-morphological features in Cherenkov images.
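
The caption does not spell out how 36 patches are obtained from each retina image; the following sketch shows one plausible scheme, a regular 6 x 6 grid of 224 x 224 crops, purely as an illustrative assumption (the actual tiling and any overlap may differ).

    # Illustrative sketch: tile a retina image into 36 patches of 224 x 224 pixels on a
    # regular 6 x 6 grid. The grid layout and overlap behaviour are assumptions.
    import numpy as np

    def crop_patches(image: np.ndarray, patch: int = 224, grid: int = 6) -> list:
        """Return grid*grid square patches sampled on an evenly spaced grid."""
        h, w = image.shape[:2]
        ys = np.linspace(0, h - patch, grid).astype(int)
        xs = np.linspace(0, w - patch, grid).astype(int)
        return [image[y:y + patch, x:x + patch] for y in ys for x in xs]

    patches = crop_patches(np.zeros((2048, 2048)))   # e.g. a full-resolution retina image
    assert len(patches) == 36 and patches[0].shape == (224, 224)
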
Fig. 3.
Architecture of SegResNet2D. The input is N grayscale retina or Cherenkov images of 224 by 224 pixels, where N is the number of images in the training dataset. The outputs are binary masks of the segmented vasculature with the same size as the input images. The detailed steps within each layer of the architecture are described in the dashed-line box below. ‘x2’ and ‘x4’ indicate repeating the described steps, looping from the first through the last step, two or four times, respectively.
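
As a quick check of the input/output contract described in this caption (grayscale 224 x 224 images in, binary masks of the same size out), the short sketch below again uses MONAI's SegResNet as an assumed stand-in for SegResNet2D.

    # Shape sanity check: N x 1 x 224 x 224 images in, masks of the same spatial size out.
    import torch
    from monai.networks.nets import SegResNet

    net = SegResNet(spatial_dims=2, in_channels=1, out_channels=1, init_filters=16)
    x = torch.rand(4, 1, 224, 224)                    # N = 4 example grayscale images
    logits = net(x)                                   # -> shape (4, 1, 224, 224)
    masks = (torch.sigmoid(logits) > 0.5).float()     # binary vasculature masks
    print(logits.shape, masks.unique())
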
Fig. 4.
Segmentation of Cherenkov-imaged bio-morphological features in 10 representative breast cancer patients. (a)-(j) Bio-morphological features segmented by the fine-tuned SegResNet model, shown with enhanced edges and transparently overlaid on the Cherenkov images of the 10 breast patients. The segmented features are mainly subcutaneous veins on the breast surface but partly include other features such as scars and nipples. Red arrows in panels (b), (e), (f), and (i) indicate false inclusion of the nipple and scars in the segmentation, while white arrows in panels (c), (g), (h), and (j) indicate well-performed predictions that accurately and exclusively identify vasculature.
Fig. 5.
Assessment of model robustness. (a) Qualitative segmentation results for an example Cherenkov image under rotation and added noise. (b) Similarity between the original prediction and the predictions under these variations across all Cherenkov test images.
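
A robustness check of this kind can be reproduced by comparing the prediction on the original image with predictions on rotated and noise-corrupted copies. In the sketch below, the rotation angle, noise level, and the use of a Dice coefficient as the similarity measure are illustrative assumptions rather than the paper's exact protocol.

    # Sketch of a rotation / added-noise robustness check (assumptions noted above).
    import torch
    import torchvision.transforms.functional as TF
    from monai.networks.nets import SegResNet

    def dice(a, b, eps=1e-7):
        inter = (a * b).sum()
        return (2 * inter + eps) / (a.sum() + b.sum() + eps)

    def predict(net, img):
        return (torch.sigmoid(net(img)) > 0.5).float()

    net = SegResNet(spatial_dims=2, in_channels=1, out_channels=1, init_filters=16).eval()
    image = torch.rand(1, 1, 224, 224)                # placeholder Cherenkov image

    with torch.no_grad():
        ref = predict(net, image)                                      # original prediction
        rot = TF.rotate(predict(net, TF.rotate(image, 10.0)), -10.0)   # rotate, predict, undo
        noisy = predict(net, image + 0.05 * torch.randn_like(image))   # additive noise
    print(dice(ref, rot), dice(ref, noisy))
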
Fig. 6.
Deep learning versus manual segmentation. (a) Visualization of the DL prediction and three repeated manual segmentations (red, green, and blue represent the three repetitions) by one experienced observer on two Cherenkov cases. (b) Segmentation consistency across the three repetitions by DL compared with that of five observers on the two cases.
Fig. 7.
Model segmentation performance on raw Cherenkov video frames with different numbers of cumulative frames. Each sub-cumulative frame segmentation was compared with the segmentation of the full time-series cumulation using the Dice score. (a) Segmentation for a free-breathing (FB) patient. (b) Segmentation for a deep inspiration breath hold (DIBH) patient. (c) The blue curve at the top represents the FB patient, and the orange curve at the bottom represents the DIBH patient. Shaded areas represent the standard errors across temporal sub-cumulative frames.
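
The cumulative-frame analysis can be sketched as segmenting images cumulated over the first n video frames and scoring each result against the segmentation of the full time-series cumulation. Frame averaging as the cumulation operator and the random placeholder video below are assumptions for illustration only.

    # Sketch of the cumulative-frame comparison (assumptions noted above).
    import torch
    from monai.networks.nets import SegResNet

    def dice(a, b, eps=1e-7):
        inter = (a * b).sum()
        return (2 * inter + eps) / (a.sum() + b.sum() + eps)

    net = SegResNet(spatial_dims=2, in_channels=1, out_channels=1, init_filters=16).eval()
    frames = torch.rand(120, 1, 224, 224)             # placeholder raw Cherenkov video, T = 120

    def segment_cumulation(n):
        cum = frames[:n].mean(dim=0, keepdim=True)    # cumulate (average) the first n frames
        with torch.no_grad():
            return (torch.sigmoid(net(cum)) > 0.5).float()

    full_seg = segment_cumulation(len(frames))
    scores = [dice(segment_cumulation(n), full_seg).item() for n in range(1, len(frames) + 1)]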
