Watch and learn: leveraging expert knowledge and language for surgical video understanding

David Gastager et al. Int J Comput Assist Radiol Surg. 2025 Jul 2. doi: 10.1007/s11548-025-03472-4. Online ahead of print.

Abstract

Purpose: Automated surgical workflow analysis is a common yet challenging task with diverse applications in surgical education, research, and clinical decision-making. Although videos are routinely collected during surgical interventions, the lack of annotated datasets hinders the development of accurate and comprehensive workflow analysis solutions. We introduce a novel approach to address the sparsity and heterogeneity of annotated training data, inspired by how humans learn: by watching experts and understanding their explanations.

Methods: Our method leverages a video-language model trained on alignment, denoising, and generative tasks to learn short-term spatio-temporal and multimodal representations. A task-specific temporal model is then used to capture relationships across entire videos. To achieve comprehensive video-language understanding in the surgical domain, we introduce a data collection and filtering strategy to construct a large-scale pretraining dataset from educational YouTube videos. We then utilize parameter-efficient fine-tuning by projecting downstream task annotations from publicly available surgical datasets into the language domain.
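
The following is a minimal illustrative sketch, not the authors' implementation, of the fine-tuning idea described above: downstream phase annotations are projected into the language domain as prompts and matched against clip-level video embeddings, which also enables zero-shot phase recognition. The encoder modules, prompt wording, and tensor shapes are placeholder assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ClipVideoEncoder(nn.Module):
        """Toy stand-in for the short-term spatio-temporal video encoder."""
        def __init__(self, dim=512):
            super().__init__()
            self.proj = nn.Linear(3, dim)           # maps pooled RGB statistics to an embedding
        def forward(self, clips):                   # clips: (B, 3, T, H, W)
            pooled = clips.mean(dim=(2, 3, 4))      # (B, 3) global average pooling
            return F.normalize(self.proj(pooled), dim=-1)

    class PromptTextEncoder(nn.Module):
        """Toy stand-in for the text encoder that embeds phase prompts."""
        def __init__(self, vocab_size=30522, dim=512):
            super().__init__()
            self.emb = nn.EmbeddingBag(vocab_size, dim)
        def forward(self, token_ids):               # token_ids: (num_phases, seq_len)
            return F.normalize(self.emb(token_ids), dim=-1)

    # Phase annotations rewritten as natural-language prompts (illustrative wording).
    phase_prompts = [
        "the surgeon prepares the surgical field",
        "the surgeon dissects the hepatocystic triangle",
        "the surgeon clips and cuts the cystic duct and artery",
    ]

    video_encoder, text_encoder = ClipVideoEncoder(), PromptTextEncoder()
    clips = torch.randn(8, 3, 16, 112, 112)                        # 8 short clips from one video
    token_ids = torch.randint(0, 30522, (len(phase_prompts), 12))  # toy tokenization

    with torch.no_grad():
        v = video_encoder(clips)                    # (8, 512) clip embeddings
        t = text_encoder(token_ids)                 # (3, 512) prompt embeddings
        phase_per_clip = (v @ t.T).argmax(dim=-1)   # nearest prompt = zero-shot phase per clip
    print(phase_per_clip)

In the full pipeline a task-specific temporal model would then refine these per-clip predictions across the entire video; the nearest-prompt matching above only illustrates how projecting annotations into language enables zero-shot transfer.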

Results: Extensive experiments in two surgical domains demonstrate the effectiveness of our approach, with performance improvements of up to 7% in phase segmentation, 5% in zero-shot phase segmentation, and performance comparable to fully supervised models in few-shot settings. Harnessing the model's capabilities for long-range temporal localization and text generation, we present the first comprehensive solution for dense video captioning (DVC) of surgical videos, despite the absence of existing DVC datasets in the surgical domain.
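
To make the DVC output format concrete, a prediction for one video is a set of temporally localized, generated captions. The sketch below shows a hypothetical example; the timestamps and caption wording are invented for illustration and are not results from the paper.

    # Hypothetical dense-video-captioning output for one surgical video:
    # each event pairs a start/end time (in seconds) with a generated caption.
    dvc_prediction = [
        {"start": 0.0, "end": 145.0, "caption": "trocars are placed and the abdomen is insufflated"},
        {"start": 145.0, "end": 610.0, "caption": "the hepatocystic triangle is dissected to expose the cystic duct"},
        {"start": 610.0, "end": 790.0, "caption": "the cystic duct and artery are clipped and divided"},
    ]
    for event in dvc_prediction:
        print(f"{event['start']:7.1f}-{event['end']:7.1f} s  {event['caption']}")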

Conclusion: We introduce a novel approach to surgical workflow understanding that leverages video-language pretraining, large-scale video pretraining, and optimized fine-tuning. Our method improves performance over state-of-the-art techniques and enables new downstream tasks for surgical video understanding.

Keywords: Dense video captioning; Multi-modal video understanding; Surgical workflow recognition; Vision-language models.


Conflict of interest statement

Conflict of interest: David Gastager was supported by Carl ZEISS AG and affiliated with the Technical University of Munich (TUM). Ghazal Ghazaei is employed by Carl ZEISS AG and collaborates with TUM and TU Darmstadt. Constantin Patsch is supported by TUM. Ethical approval: This article does not contain any studies with human participants or animals performed by any of the authors.

