Neural Netw. 2025 Oct 1;194:108160. doi: 10.1016/j.neunet.2025.108160. Online ahead of print.

Fine-tuning large language models in federated learning with fairness-aware prompt selection


Yalan Jiang et al.

Abstract

Large language models (LLMs) require domain-specific fine-tuning for real-world deployment, yet face two critical barriers: data privacy and computational constraints. Federated learning (FL) provides an indispensable solution by enabling collaborative tuning across distributed private data sources while preserving confidentiality. However, existing FL-LLM methods suffer from non-IID degradation, communication overhead, and fairness issues. To address these challenges, this paper proposes FedPSF-LLM, a novel FL framework integrating three core innovations: (1) the Prompt Selection Module (PSM) adaptively selects high-impact prompt parameters to reduce transmission costs; (2) the Dynamic Weighting Module (DWM) adjusts aggregation weights based on client contribution and data disparity; (3) the Attention-Based Bias Mitigation (ABM) module corrects aggregation bias via alignment-aware reweighting. Extensive experiments on 10 NLP tasks and 4 LLMs demonstrate that FedPSF-LLM improves fairness while maintaining strong overall performance. Compared to state-of-the-art methods, it reduces accuracy variance by 52.1%, improves worst-client accuracy by 8.6%, and narrows the performance gap between small and large clients by 74.4%, while maintaining 76.8% global accuracy. These results show superiority over 8 baselines in both fairness metrics and communication efficiency, establishing a new paradigm for privacy-preserving and fairness-guaranteed LLM deployment in federated systems.
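
The abstract names the three modules but gives no formulas, so the following Python sketch only illustrates the kind of mechanism each one suggests. The top-k magnitude selection (PSM), the contribution-versus-disparity score (DWM), and the cosine-alignment reweighting (ABM) are assumptions made for illustration, not the paper's actual method.

    # Minimal illustrative sketch, not the authors' implementation: the abstract gives no
    # formulas, so the top-k magnitude rule (PSM), the contribution-minus-disparity score
    # (DWM), and the cosine-alignment reweighting (ABM) below are all assumptions.
    import numpy as np

    def client_select(delta, k):
        # PSM-style step (assumed): each client uploads only the k prompt parameters
        # with the largest-magnitude local updates, cutting transmission cost.
        sparse = np.zeros_like(delta)
        idx = np.argsort(-np.abs(delta))[:k]
        sparse[idx] = delta[idx]
        return sparse

    def dynamic_weights(contributions, disparities, alpha=0.5):
        # DWM-style step (assumed): favor clients that contribute more and whose data
        # diverge less from the global distribution; softmax keeps weights positive.
        scores = alpha * contributions - (1.0 - alpha) * disparities
        w = np.exp(scores - scores.max())
        return w / w.sum()

    def alignment_reweights(updates, reference):
        # ABM-style step (assumed): scale each client by the cosine alignment of its
        # update with a reference direction, damping updates that pull the model off course.
        cos = np.array([u @ reference / (np.linalg.norm(u) * np.linalg.norm(reference) + 1e-12)
                        for u in updates])
        a = np.clip(cos, 0.0, None)
        return a / (a.sum() + 1e-12)

    def server_aggregate(sparse_updates, contributions, disparities):
        # One federated round: combine the two weightings and merge the sparse prompt updates.
        u = np.stack(sparse_updates)
        w = dynamic_weights(contributions, disparities) * alignment_reweights(u, u.mean(axis=0))
        w = w / w.sum()
        return (w[:, None] * u).sum(axis=0)

    # Toy round: 4 clients, 64 prompt parameters, each transmitting its top 16.
    rng = np.random.default_rng(0)
    sparse = [client_select(rng.normal(size=64), k=16) for _ in range(4)]
    merged = server_aggregate(sparse, contributions=rng.random(4), disparities=rng.random(4))
    print(merged.nonzero()[0].size)  # nonzero entries = union of the clients' selected indices

In this sketch, selection happens client-side so uploads stay sparse, and the two server-side weightings are multiplied so a client must both score well on contribution/disparity and align with the global update direction to carry weight in aggregation.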

Keywords: Communication efficiency; Fairness-aware fine-tuning; Federated learning; Large language models; Prompt selection.


Conflict of interest statement

Declaration of competing interest: We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled "Fine-tuning large language models in federated learning with fairness-aware prompt selection."
