Fine-tuning large language models in federated learning with fairness-aware prompt selection
- PMID: 41072284
- DOI: 10.1016/j.neunet.2025.108160
Abstract
Large language models (LLMs) require domain-specific fine-tuning for real-world deployment, yet face critical barriers of data privacy and computational constraints. Federated learning (FL) provides an indispensable solution by enabling collaborative tuning across distributed private data sources while preserving confidentiality. However, existing FL-LLM methods suffer from non-IID degradation, communication overhead, and fairness issues. To address these challenges, this paper proposes FedPSF-LLM, a novel FL framework integrating three core innovations: (1) the Prompt Selection Module (PSM) adaptively selects high-impact prompt parameters to reduce transmission costs; (2) the Dynamic Weighting Module (DWM) adjusts aggregation weights based on client contribution and data disparity; (3) the Attention-Based Bias Mitigation (ABM) corrects aggregation bias via alignment-aware reweighting. Extensive experiments on 10 NLP tasks and 4 LLMs demonstrate that FedPSF-LLM improves fairness while maintaining strong overall performance. Compared to state-of-the-art methods, it reduces accuracy variance by 52.1%, improves worst-client accuracy by 8.6%, and narrows small-large client performance gaps by 74.4%, while maintaining 76.8% global accuracy. These results demonstrate superiority over 8 baselines in both fairness metrics and communication efficiency, establishing a new paradigm for privacy-preserving and fairness-guaranteed LLM deployment in federated systems.
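The abstract only summarizes the Dynamic Weighting Module, so the following is a minimal illustrative sketch of contribution- and disparity-aware aggregation, not the paper's actual algorithm: each client's aggregation weight blends a hypothetical contribution score with a penalty for data disparity, normalized by a softmax. All function and parameter names (`aggregate`, `alpha`, the score formula) are assumptions for illustration.

```python
import numpy as np

def aggregate(client_updates, contributions, disparities, alpha=0.5):
    """Illustrative fairness-aware federated averaging.

    Each client's weight rises with its contribution score and falls
    with its data-disparity measure; weights are softmax-normalized
    before the updates are averaged.
    """
    scores = (alpha * np.asarray(contributions, dtype=float)
              - (1.0 - alpha) * np.asarray(disparities, dtype=float))
    exp = np.exp(scores - scores.max())   # stable softmax normalization
    weights = exp / exp.sum()
    stacked = np.stack(client_updates)    # shape: (num_clients, dim)
    return weights @ stacked              # weighted-average global update

# Example: two clients with equal scores reduce to plain averaging.
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
global_update = aggregate(updates, contributions=[1.0, 1.0],
                          disparities=[0.0, 0.0])
```

Under equal contribution and disparity scores, the weights collapse to a uniform average, recovering standard FedAvg; skewed scores shift weight toward under-served clients, which is the qualitative behavior the DWM description implies.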
Keywords: Communication efficiency; Fairness-aware fine-tuning; Federated learning; Large language models; Prompt selection.
Copyright © 2025. Published by Elsevier Ltd.
Conflict of interest statement
Declaration of competing interest: The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence this work, and no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the positions presented in, or the review of, this manuscript.