Abstract
Existing device scheduling methods in wireless federated learning (FL) mainly focus on selecting the devices with the maximum gradient norm or loss value, and require all devices to perform local training in each round. This may incur extra training costs and schedule devices with similar data statistics, thus degrading learning performance. To mitigate these problems, we propose to schedule a subset of representative devices and find the corresponding per-device stepsizes to approximate the full-participation aggregated gradient. Taking into account the limited wireless bandwidth, we formulate an optimization problem that captures the trade-off between representativity and latency by optimizing the device scheduling and bandwidth allocation policies. Our analysis reveals that the optimal bandwidth allocation is achieved when all scheduled devices have the same latency, consisting of computation and communication latencies. Then, by proving the non-monotone submodularity of the problem, we develop a double greedy algorithm to solve for the device scheduling policy. To avoid local training at unscheduled devices, we utilize the historical gradient information of devices to estimate the current gradient for the device scheduling design. Experimental results show that the proposed latency- and representativity-aware scheduling algorithm saves over 16% and 12% of training time on the MNIST and CIFAR-10 datasets, respectively, compared with scheduling algorithms based on either latency or representativity alone.
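The abstract's double greedy step refers to the standard deterministic double greedy algorithm for unconstrained non-monotone submodular maximization. The sketch below illustrates that algorithm only; the toy utility (data-class coverage minus a per-device cost) and the `coverage` values are hypothetical stand-ins for the paper's representativity/latency objective, not the authors' actual formulation.

```python
# Minimal sketch of deterministic double greedy for unconstrained
# non-monotone submodular maximization (1/3-approximation).
# The utility f below is a hypothetical example, not the paper's objective.

def double_greedy(ground_set, f):
    """Grow X from the empty set and shrink Y from the full set;
    keep each element in exactly one of them based on marginal gains."""
    X, Y = set(), set(ground_set)
    for i in ground_set:
        gain_add = f(X | {i}) - f(X)      # marginal gain of adding i to X
        gain_rem = f(Y - {i}) - f(Y)      # marginal gain of removing i from Y
        if gain_add >= gain_rem:
            X.add(i)
        else:
            Y.remove(i)
    return X  # X == Y when the loop finishes


# Toy non-monotone submodular utility: classes covered by the scheduled
# devices minus a fixed scheduling cost per device (assumed values).
coverage = {0: {'a', 'b'}, 1: {'b', 'c'}, 2: {'c'}, 3: {'a', 'd'}}

def f(S):
    covered = set().union(*(coverage[i] for i in S)) if S else set()
    return len(covered) - 0.5 * len(S)

selected = double_greedy([0, 1, 2, 3], f)  # device 2 adds cost but no new class
```

In this toy instance the algorithm drops device 2, whose data classes are already covered by the others, mirroring the paper's goal of avoiding devices with similar data statistics.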
| Original language | English |
| --- | --- |
| DOIs | |
| Publication status | Published - 2022 |
| Event | 2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering, ICECCME 2022 - Male, Maldives. Duration: 16 Nov 2022 → 18 Nov 2022 |
Conference

| Conference | 2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering, ICECCME 2022 |
| --- | --- |
| Country/Territory | Maldives |
| City | Male |
| Period | 16/11/2022 → 18/11/2022 |
Keywords
- Device scheduling
- resource allocation
- submodular optimization
- wireless federated learning