TY - JOUR
T1 - Federated Learning With Adaptive Aggregation Weights for Non-IID Data in Edge Networks
AU - Li, Xiaodong
AU - Gao, Yulong
AU - Deng, Yansha
AU - Jiang, Xinzhuo
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated learning (FL) enables edge nodes to collaboratively train a global model under the coordination of a server without sharing local private data. However, data heterogeneity across nodes leads to serious performance degradation or even non-convergence of the learned model. To tackle this challenge, most existing methods typically involve either local model regularization or global model adjustments. Nevertheless, these methods primarily perform model aggregation based on the dataset size proportion, while the exploration of the impact of non-independent and identically distributed (non-IID) data on aggregation weights remains insufficient. To this end, we theoretically derive an analytical expression for the aggregation weights by minimizing the convergence upper bound of standard FL on non-IID data across nodes, achieving a tighter bound and superior convergence performance. Accordingly, we propose an adaptive aggregation weight strategy, called FedAAW. It can be easily incorporated into other FL methods to improve their convergence performance with negligible additional communication overhead. Extensive experiments on four common datasets show that FedAAW can effectively mitigate the performance degradation caused by data heterogeneity in various cases. Simply applying FedAAW to other methods can significantly improve their performance, achieving a maximum improvement of 37.32% in test accuracy and outperforming other state-of-the-art aggregation weight strategies.
KW - adaptive aggregation weight
KW - edge networks
KW - Federated learning
KW - non-IID data
UR - http://www.scopus.com/inward/record.url?scp=85216862887&partnerID=8YFLogxK
U2 - 10.1109/TCCN.2025.3534248
DO - 10.1109/TCCN.2025.3534248
M3 - Article
AN - SCOPUS:85216862887
SN - 2332-7731
JO - IEEE Transactions on Cognitive Communications and Networking
JF - IEEE Transactions on Cognitive Communications and Networking
ER -