Benchmarking pathology foundation models: Adaptation strategies and scenarios
- PMID: 40179809
- DOI: 10.1016/j.compbiomed.2025.110031
Abstract
In computational pathology, several foundation models have recently been developed, demonstrating enhanced learning capability for analyzing pathology images. However, adapting these models to various downstream tasks remains challenging, particularly when faced with datasets from different sources and acquisition conditions, as well as limited data availability. In this study, we benchmark four pathology-specific foundation models across 20 datasets and two scenarios, consistency assessment and flexibility assessment, covering diverse adaptation settings and downstream tasks. In the consistency assessment scenario, which involves five fine-tuning methods, we found that the parameter-efficient fine-tuning approach was both efficient and effective for adapting pathology-specific foundation models to diverse datasets within the same classification tasks. For slide-level survival prediction, the performance of the foundation models depended on the choice of feature aggregation mechanism and the characteristics of the data. In the flexibility assessment scenario, which simulates data-limited environments using five few-shot learning methods, we observed that the foundation models benefited more from few-shot learning methods that modify only the testing phase. These findings provide insights that could guide the deployment of pathology-specific foundation models in real clinical settings, potentially improving the accuracy and reliability of pathology image analysis. The code for this study is available at https://github.com/QuIIL/BenchmarkingPathologyFoundationModels.
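To make the parameter-efficient fine-tuning idea concrete, the sketch below wraps a frozen linear projection of a pretrained backbone with a LoRA-style low-rank adapter, so that only a small number of new parameters are trained. This is a minimal illustration under stated assumptions: LoRA is used here as a representative PEFT technique, and the class name, rank, and scaling values are illustrative choices, not the exact configuration benchmarked in the paper.

```python
# Minimal LoRA-style adapter for a frozen linear layer (illustrative sketch).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep pretrained weights frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # zero init: adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen pretrained projection plus scaled low-rank correction.
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Hypothetical usage: replace the attention projections of a pretrained ViT
# encoder with LoRALinear wrappers, then train only the adapter weights and a
# small classification head on the downstream pathology dataset.
```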
Keywords: Computational pathology; Few-shot learning; Fine-tuning; Foundation model.
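The flexibility-assessment finding, that foundation models benefit most from few-shot methods that adapt only during the testing phase, can be pictured with a nearest-prototype classifier built on frozen foundation-model embeddings: the encoder is never updated, and class prototypes are formed from the few labeled support patches at test time. This is a minimal sketch in the spirit of prototype-based few-shot learning; the function name and tensor shapes are hypothetical and are not taken from the paper's code.

```python
# Test-time-only few-shot classification on frozen embeddings (illustrative sketch).
import torch
import torch.nn.functional as F

@torch.no_grad()
def prototype_predict(support_feats, support_labels, query_feats, num_classes):
    """support_feats: (N, D) embeddings of the few labeled support patches.
    support_labels: (N,) integer class labels for the support patches.
    query_feats:    (M, D) embeddings of unlabeled query patches.
    Returns predicted class indices of shape (M,)."""
    support = F.normalize(support_feats, dim=-1)
    query = F.normalize(query_feats, dim=-1)
    # Class prototype = mean embedding of the support patches of that class.
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(num_classes)]
    )
    # Cosine similarity between each query patch and each class prototype.
    sims = query @ F.normalize(prototypes, dim=-1).T
    return sims.argmax(dim=1)
```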
Copyright © 2025 The Authors. Published by Elsevier Ltd. All rights reserved.
Conflict of interest statement
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.