Watch and learn: leveraging expert knowledge and language for surgical video understanding
- PMID: 40601123
- DOI: 10.1007/s11548-025-03472-4
Abstract
Purpose: Automated surgical workflow analysis is a common yet challenging task with diverse applications in surgical education, research, and clinical decision-making. Although videos are routinely collected during surgical interventions, the lack of annotated datasets hinders the development of accurate and comprehensive workflow analysis solutions. We introduce a novel approach to address the sparsity and heterogeneity of annotated training data, inspired by how humans learn by watching experts and understanding their explanations.
Methods: Our method leverages a video-language model trained on alignment, denoising, and generative tasks to learn short-term spatio-temporal and multimodal representations. A task-specific temporal model is then used to capture relationships across entire videos. To achieve comprehensive video-language understanding in the surgical domain, we introduce a data collection and filtering strategy to construct a large-scale pretraining dataset from educational YouTube videos. We then utilize parameter-efficient fine-tuning by projecting downstream task annotations from publicly available surgical datasets into the language domain.
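The abstract describes the method only at a high level, so the following is a minimal, hypothetical Python sketch of one step it mentions: projecting downstream task annotations (here, surgical phase labels in the style of the public Cholec80 dataset) into the language domain as caption-like prompts that a video-language model could be fine-tuned against. The phase vocabulary, prompt template, and helper functions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): projecting surgical phase labels
# into the language domain as caption-style prompts, one plausible way to realize
# "projecting downstream task annotations ... into the language domain".

from typing import List

# Example phase vocabulary following the public Cholec80 convention (assumed here).
CHOLEC80_PHASES = [
    "Preparation",
    "Calot triangle dissection",
    "Clipping and cutting",
    "Gallbladder dissection",
    "Gallbladder packaging",
    "Cleaning and coagulation",
    "Gallbladder retraction",
]

# Illustrative prompt template; the actual wording used by the authors is not given.
PROMPT_TEMPLATE = "In this clip, the surgeon is performing {phase}."


def phase_labels_to_captions(labels: List[int]) -> List[str]:
    """Map per-clip integer phase labels to natural-language training targets."""
    return [PROMPT_TEMPLATE.format(phase=CHOLEC80_PHASES[i].lower()) for i in labels]


def caption_to_phase(caption: str) -> int:
    """Invert the projection at evaluation time.

    A real system would score the generated caption against each phase prompt
    with the video-language model; simple substring matching is used here only
    to keep the sketch self-contained.
    """
    for idx, phase in enumerate(CHOLEC80_PHASES):
        if phase.lower() in caption.lower():
            return idx
    return -1  # no phase matched


if __name__ == "__main__":
    captions = phase_labels_to_captions([0, 2, 3])
    print(captions)
    print([caption_to_phase(c) for c in captions])
```

Casting labels as text in this way is also one plausible route to the zero-shot phase segmentation reported in the Results, since unseen phase names can be scored against generated or aligned text without retraining the classifier head.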
Results: Extensive experiments in two surgical domains demonstrate the effectiveness of our approach, with performance improvements of up to 7% in phase segmentation tasks, 5% in zero-shot phase segmentation, and comparable capabilities to fully supervised models in few-shot settings. Harnessing our model's capabilities for long-range temporal localization and text generation, we present the first comprehensive solution for dense video captioning (DVC) of surgical videos, addressing this task despite the absence of existing DVC datasets in the surgical domain.
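Because no dense video captioning datasets exist in the surgical domain, the abstract does not specify an output format; the sketch below shows a generic, assumed representation of DVC output (temporally localized segments, each with a generated caption), purely to illustrate the task. Field names and example captions are hypothetical.

```python
# Hypothetical data structure for dense video captioning (DVC) of a surgical video:
# a list of temporally localized segments, each paired with a free-text caption.

from dataclasses import dataclass
from typing import List


@dataclass
class CaptionedSegment:
    start_s: float   # segment start time in seconds
    end_s: float     # segment end time in seconds
    caption: str     # generated description of the segment


def format_dvc(segments: List[CaptionedSegment]) -> str:
    """Render DVC predictions as human-readable, timestamped lines."""
    return "\n".join(
        f"[{s.start_s:7.1f}s - {s.end_s:7.1f}s] {s.caption}" for s in segments
    )


if __name__ == "__main__":
    preds = [
        CaptionedSegment(0.0, 95.0, "The surgeon prepares the operative field."),
        CaptionedSegment(95.0, 410.0, "Dissection of the Calot triangle is performed."),
    ]
    print(format_dvc(preds))
```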
Conclusion: We introduce a novel approach to surgical workflow understanding that leverages video-language pretraining, large-scale video pretraining, and optimized fine-tuning. Our method improves performance over state-of-the-art techniques and enables new downstream tasks for surgical video understanding.
Keywords: Dense video captioning; Multi-modal video understanding; Surgical workflow recognition; Vision-language models.
© 2025. CARS.
Conflict of interest statement
Declarations. Conflict of interest: David Gastager was supported by Carl ZEISS AG and affiliated with the Technical University of Munich (TUM). Ghazal Ghazaei is employed by Carl ZEISS AG and collaborates with TUM and TU Darmstadt. Constantin Patsch is supported by TUM. Ethical approval: This article does not contain any studies with human participants or animals performed by any of the authors.
Similar articles
- Learning multi-modal representations by watching hundreds of surgical video lectures. Med Image Anal. 2025 Oct;105:103644. doi: 10.1016/j.media.2025.103644. Epub 2025 Jun 4. PMID: 40513506
- Fine-tuning medical language models for enhanced long-contextual understanding and domain expertise. Quant Imaging Med Surg. 2025 Jun 6;15(6):5450-5462. doi: 10.21037/qims-2024-2655. Epub 2025 Jun 3. PMID: 40606333
- MaskTrack: Auto-Labeling and Stable Tracking for Video Object Segmentation. IEEE Trans Neural Netw Learn Syst. 2025 Jul;36(7):12052-12065. doi: 10.1109/TNNLS.2024.3469959. PMID: 39437285
- Factors that influence parents' and informal caregivers' views and practices regarding routine childhood vaccination: a qualitative evidence synthesis. Cochrane Database Syst Rev. 2021 Oct 27;10(10):CD013265. doi: 10.1002/14651858.CD013265.pub2. PMID: 34706066
- Perceptions and experiences of the prevention, detection, and management of postpartum haemorrhage: a qualitative evidence synthesis. Cochrane Database Syst Rev. 2023 Nov 27;11(11):CD013795. doi: 10.1002/14651858.CD013795.pub2. PMID: 38009552