Review. Med Image Anal. 2024 Feb;92:103066. doi: 10.1016/j.media.2023.103066. Epub 2023 Dec 20.

Placental vessel segmentation and registration in fetoscopy: Literature review and MICCAI FetReg2021 challenge findings


Sophia Bano et al. Med Image Anal. 2024 Feb.

Abstract

Fetoscopy laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange between the twins. The procedure is particularly challenging for the surgeon due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility caused by amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for the vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures, together with 18 short video clips of an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips.
For the segmentation task, the baseline was the top performer overall (aggregated mIoU of 0.6763) and the best on the vessel class (mIoU of 0.5817), while team RREB was the best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline performed better overall than team SANO, with a mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. The detailed analysis showed that no single team outperformed the others on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis, and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
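The segmentation scores above are mean Intersection-over-Union (mIoU) values computed per class over pixel-annotated label maps. A minimal NumPy sketch of this metric follows; the skip-a-class-absent-from-both-maps convention and the class ordering are illustrative assumptions, not necessarily the challenge's exact aggregation protocol:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Intersection-over-Union for each class of a predicted vs. ground-truth label map."""
    ious = []
    for c in range(num_classes):
        p, g = (pred == c), (gt == c)
        union = np.logical_or(p, g).sum()
        if union == 0:
            # Class absent from both maps: excluded from the mean.
            ious.append(np.nan)
        else:
            ious.append(np.logical_and(p, g).sum() / union)
    return ious

def miou(pred, gt, num_classes):
    """Mean IoU over the classes that appear in either map."""
    return float(np.nanmean(per_class_iou(pred, gt, num_classes)))
```

In the challenge setting, `num_classes` would be 4 (background, vessel, tool, fetus), and per-video scores would then be aggregated across the test videos.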

Keywords: Fetoscopy; Placental scene segmentation; Surgical data science; Video mosaicking.


Conflict of interest statement

Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Fig. 1
Illustrations of Twin-to-Twin Transfusion Syndrome. (a) shows the fetoscopic laser photocoagulation procedure, where the field of view of the fetoscope is extremely narrow. (b) shows the types of anastomoses: (i) A-V: arterio-venous, (ii) V-V: veno-venous, and (iii) A-A: arterio-arterial. In the placenta, unlike in the body's circulatory system, arteries carry deoxygenated blood (in blue) and veins carry oxygenated blood (in red).
Fig. 2
Training dataset distribution: (a) and (b) segmentation classes and their overall distribution in the segmentation data.
Fig. 3
Testing dataset distribution: (a) and (b) segmentation classes and their overall distribution in the segmentation data.
Fig. 4
Representative images from the training and test datasets along with the segmentation annotations (ground truth). Each center ID is also indicated next to the video name (I - UCLH, II - IGG) for visual comparison of the variability between the two centers.
Fig. 5
Representative frames from the training and test datasets, sampled every 2 seconds. These clips are unannotated; the length of each clip is given in Table 2. The center ID is also marked on each video sequence (I - UCLH, II - IGG) for visual comparison of the data from the two centers.
Fig. 6
Illustration of the N-frame SSIM evaluation metric from Bano et al. (2020a).
Fig. 7
FetReg2021 timeline and challenge participation statistics.
Fig. 8
FetReg2021 submission protocol illustrating the docker image verification protocol.
Fig. 9
Graphical overview of the participants' methodologies for Task 1 as described in Section 4 (Key: X - input frame; y - ground truth; ŷ - prediction). AQ-ENIB (a) proposed an ensemble of DenseNet models with Test-Time Augmentation (TTA). BioPolimi (b) combined ResNet50 features with a Histogram of Oriented Gradients (HoG) computed on X. RREB (c) proposed a multi-task U2Net for segmentation and multi-scale regression of predicted HoG features (ĤoG0, ĤoG1, …) against those computed on y (HoG0, HoG1, …). GRECHID (d) used 3 SEResNeXt-UNet models individually trained on each class and ensembled by thresholding, where pixels_high-confidence are pixels predicted with high confidence and count_threshold is the empirical threshold. SANO (e) proposed a mean ensemble of Feature Pyramid Networks (FPN) with a ResNet152 backbone. OOF (f) used an EfficientNet UNet++, preprocessing images with contrast-limited adaptive histogram equalization (CLAHE) and a median filter.
Fig. 10
Comparison of the baseline model trained on single-center versus multi-center data: mIoU over each test video for the baseline model trained with data from one center (I - UCLH, II - IGG) or both. Bar colors from left to right indicate Centre I, II and I+II results.
Fig. 11
Method comparison showing boxplot for frame-level IoU for each team on each video. Bar colors from left to right indicate teams in alphabetical order.
Fig. 12
Sample images from the K-Fold Cross-Validation (from Bano et al., 2021) along with the segmentation annotations (Groundtruth) and Baseline segmentation output (Prediction) for Video001, 002, 003, 004, 005, 006, 007, 008 and 009 videos. Background (black), vessel (red), tool (blue) and fetus (green) labels are shown.
Fig. 13
Examples of failure cases from all methods. The image, the groundtruth, the video ID and the frame mIoU values (including background) for each sample are also reported.
Fig. 14
Qualitative comparison of the 7 methods under analysis. Both the baseline and RREB generalize better over the placental scene dataset. The baseline achieved better segmentation than RREB in (c), (d) and (e). OOF is the weakest performer, as it failed to generalize, wrongly segmenting vessels and missing the fetus class. White markers on the input and ground-truth images indicate regions where observations can be drawn between the seven methods under comparison.
Fig. 15
Qualitative comparison of the Baseline (Bano et al., 2020a) and SANO methods showing (first column) the mosaics generated by the Baseline method, (second column) the mosaics generated by the SANO method, and (third column) the 5-frame SSIM per frame for both methods. Baseline performance is better in all videos except Video020.
Fig. 16
Quantitative comparison of the Baseline (Bano et al., 2020a) and SANO methods using the N-frame SSIM metric.
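The N-frame SSIM metric referenced in Figs. 6, 15 and 16 (from Bano et al., 2020a) chains the N estimated relative transforms to warp frame t into the view of frame t+N and computes SSIM against the actual frame t+N. The SSIM core can be sketched as follows; this is a simplified single-window version (no Gaussian weighting or sliding windows, unlike standard library implementations), and the warping/chaining step is omitted:

```python
import numpy as np

def ssim_global(a, b, data_range=255.0):
    """Single-window SSIM (Wang et al., 2004) over whole images.
    Production implementations average SSIM over local windows instead."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

In the full metric, this score would be computed over the overlap between the warped frame t and frame t+N (N=5 in the challenge) and averaged over the sequence, so drift in the estimated transforms lowers the score.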

References

    1. Alabi O., Bano S., Vasconcelos F., David A.L., Deprest J., Stoyanov D. Robust fetoscopic mosaicking from deep learned flow fields. Int. J. Comput. Assist. Radiol. Surg. 2022.
    2. Almoussa N., Dutra B., Lampe B., Getreuer P., Wittman T., Salafia C., Vese L. Automated vasculature extraction from placenta images. In: Medical Imaging 2011: Image Processing, Vol. 7962. SPIE; 2011. p. 79621L.
    3. Bano S., Casella A., Vasconcelos F., Moccia S., Attilakos G., Wimalasundera R., David A.L., Paladini D., Deprest J., De Momi E., et al. FetReg: Placental vessel segmentation and registration in fetoscopy challenge dataset. 2021. arXiv preprint arXiv:2106.05923.
    4. Bano S., Vasconcelos F., David A.L., Deprest J., Stoyanov D. Placental vessel-guided hybrid framework for fetoscopic mosaicking. Comput. Methods Biomech. Biomed. Eng.: Imaging Visual. 2022:1–6.
    5. Bano S., Vasconcelos F., Shepherd L.M., Vander Poorten E., Vercauteren T., Ourselin S., David A.L., Deprest J., Stoyanov D. Deep placental vessel segmentation for fetoscopic mosaicking. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2020. pp. 763–773.