Embedded prompt tuning: Towards enhanced calibration of pretrained models for medical images
- PMID: 38996667
- DOI: 10.1016/j.media.2024.103258
Abstract
Foundation models pre-trained on large-scale data have achieved widespread success on a variety of natural-image downstream tasks. Parameter-efficient fine-tuning (PEFT) methods adapt foundation models to new domains by updating only a small fraction of their parameters, thereby reducing computational overhead. However, the effectiveness of these PEFT methods, especially in cross-domain few-shot scenarios such as medical image analysis, has not been fully explored. In this work, we study the performance of PEFT when adapting foundation models to medical image classification tasks. Furthermore, to address the limitations of mainstream prompt tuning methods, namely how prompts are introduced and their approximation capability on Transformer architectures, we propose the Embedded Prompt Tuning (EPT) method, which embeds prompt tokens into the expanded channels. We also find that anomalies arise in the feature-space distribution of foundation models during pre-training, and that prompt tuning can help mitigate this negative impact. To explain this phenomenon, we introduce a novel perspective on prompt tuning: prompt tuning acts as a distribution calibrator. We support this view by analysing the patch-wise scaling and feature-separation operations contained in EPT. Our experiments show that EPT outperforms several state-of-the-art fine-tuning methods by a significant margin on few-shot medical image classification tasks and completes fine-tuning in highly competitive time, indicating that EPT is an effective PEFT method. The source code is available at github.com/zuwenqiang/EPT.
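The abstract describes the mechanism only at a high level. The sketch below is a minimal, illustrative contrast between conventional prompt insertion along the sequence axis (as in VPT) and a channel-level "embedded" variant in which learnable prompt entries are concatenated onto every patch token. Module names, shapes, and the back-projection layer are assumptions made for illustration and are not taken from the released implementation at github.com/zuwenqiang/EPT.

```python
# Illustrative contrast between sequence-level and channel-level prompting.
# Assumptions (not from the paper): ViT-B/16-style patch tokens of shape
# (batch, num_patches, dim); the channel variant projects back to the frozen
# backbone's width so downstream blocks are left unchanged.
import torch
import torch.nn as nn


class SequencePrompt(nn.Module):
    """VPT-style prompting: prepend learnable tokens along the sequence axis."""

    def __init__(self, num_prompts: int, dim: int):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim) -> (batch, num_prompts + num_patches, dim)
        batch = tokens.shape[0]
        return torch.cat([self.prompts.expand(batch, -1, -1), tokens], dim=1)


class ChannelPrompt(nn.Module):
    """Channel-level prompting (illustrative): append learnable prompt channels
    to every patch token, expanding the channel dimension instead of the sequence."""

    def __init__(self, num_prompt_channels: int, dim: int):
        super().__init__()
        self.prompt_channels = nn.Parameter(
            torch.randn(1, 1, num_prompt_channels) * 0.02
        )
        # Hypothetical back-projection to the original width, for illustration only.
        self.proj = nn.Linear(dim + num_prompt_channels, dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim) -> same shape after channel expansion + projection
        batch, n, _ = tokens.shape
        expanded = torch.cat(
            [tokens, self.prompt_channels.expand(batch, n, -1)], dim=-1
        )
        return self.proj(expanded)


if __name__ == "__main__":
    x = torch.randn(2, 196, 768)               # ViT-B/16 patch tokens
    print(SequencePrompt(10, 768)(x).shape)    # torch.Size([2, 206, 768])
    print(ChannelPrompt(16, 768)(x).shape)     # torch.Size([2, 196, 768])
```

In both variants only the prompt parameters (and, here, the small projection) would be trained while the backbone stays frozen, which is what makes the approach parameter-efficient.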
Keywords: Few-shot medical image analysis; Foundation model; Parameter-efficient fine-tuning; Visual prompt tuning.
Copyright © 2024 Elsevier B.V. All rights reserved.
Conflict of interest statement
Declaration of competing interest: The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Lei Ma reports a relationship with Peking University that includes: employment. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.