Sign-Entropy Regularization for Personalized Federated Learning
- PMID: 40566188
- PMCID: PMC12191723
- DOI: 10.3390/e27060601
Abstract
Personalized Federated Learning (PFL) seeks to train client-specific models across distributed data silos with heterogeneous distributions. We introduce Sign-Entropy Regularization (SER), a novel entropy-based regularization technique that penalizes excessive directional variability in client-local optimization. Motivated by Descartes' Rule of Signs, we hypothesize that frequent sign changes in gradient trajectories reflect complexity in the local loss landscape. By minimizing the entropy of gradient sign patterns during local updates, SER encourages smoother optimization paths, improves convergence stability, and enhances personalization. We formally define a differentiable sign-entropy objective over the gradient sign distribution and integrate it into standard federated optimization frameworks, including FedAvg and FedProx. The regularizer is computed efficiently and applied post hoc per local round. Extensive experiments on three benchmark datasets (FEMNIST, Shakespeare, and CIFAR-10) show that SER improves both average and worst-case client accuracy, reduces variance across clients, accelerates convergence, and smooths the local loss surface as measured by Hessian trace and spectral norm. We also present a sensitivity analysis of the regularization strength ρ and discuss the potential for client-adaptive variants. Comparative evaluations against state-of-the-art methods (e.g., Ditto, pFedMe, momentum-based variants, Entropy-SGD) highlight that SER introduces an orthogonal and scalable mechanism for personalization. Theoretically, we frame SER as an information-theoretic and geometric regularizer that stabilizes learning dynamics without requiring dual-model structures or communication modifications. This work opens avenues for trajectory-based regularization and hybrid entropy-guided optimization in federated and resource-constrained learning settings.
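The abstract describes SER as a penalty on the entropy of gradient sign patterns, computed efficiently and applied per local round. A minimal sketch of such a sign-entropy term, assuming a Bernoulli model over per-coordinate gradient signs (the function name, the gradient-history layout, and the binary-entropy formulation are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def sign_entropy(grad_history, eps=1e-12):
    """Binary entropy of the empirical gradient-sign distribution.

    grad_history: array of shape (steps, params) holding the per-step
    gradients collected during one client's local round (an assumed
    layout for illustration).
    """
    signs = np.sign(grad_history)
    p = np.mean(signs > 0)            # empirical fraction of positive signs
    p = np.clip(p, eps, 1.0 - eps)    # avoid log(0)
    # H(p) = -p log p - (1-p) log(1-p); maximal when signs are evenly split
    return float(-(p * np.log(p) + (1.0 - p) * np.log(1.0 - p)))
```

A client-local objective would then take the hedged form `loss + rho * sign_entropy(grad_history)`, with `rho` the regularization strength studied in the sensitivity analysis; a differentiable variant, as the abstract's "differentiable sign-entropy objective" suggests, could replace the hard `np.sign` with a smooth surrogate such as `tanh`.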
Keywords: entropy regularization; federated learning; gradient sign patterns; personalization; polynomial root geometry.
Conflict of interest statement
The author declares no conflicts of interest.
References
- McMahan H.B., Moore E., Ramage D., Hampson S., y Arcas B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics; Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282.
- Hu X., Li S., Liu Y. Generalization Bounds for Federated Learning: Fast Rates, Unparticipating Clients and Unbounded Losses. Proceedings of the 2023 International Conference on Learning Representations; Kigali, Rwanda, 1–5 May 2023. Available online: https://openreview.net/forum?id=-EHqoysUYLx (accessed on 20 April 2025).
- Zhao R., Zheng Y., Yu H., Jiang W., Yang Y., Tang Y., Wang L. From Sample Poverty to Rich Feature Learning: A New Metric Learning Method for Few-Shot Classification. IEEE Access. 2024;12:124990–125002. doi: 10.1109/ACCESS.2024.3444483.
- Li T., Hu S., Beirami A., Smith V. Ditto: Fair and Robust Federated Learning Through Personalization. Proceedings of the 38th International Conference on Machine Learning (ICML); Virtual Event, 18–24 July 2021.
- Arivazhagan M., Aggarwal V., Singh A.K., Choudhary S. Federated Learning with Personalization Layers. arXiv. 2019. arXiv:1912.00818.