Paper Title
Federated Learning with Communication Delay in Edge Networks
Paper Authors
Paper Abstract
Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training through edge networks. This work addresses an important consideration of federated learning at the network edge: communication delays between the edge nodes and the aggregator. A technique called FedDelAvg (federated delayed averaging) is developed, which generalizes the standard federated averaging algorithm to incorporate a weighting between the current local model and the delayed global model received at each device during the synchronization step. Through theoretical analysis, an upper bound is derived on the global model loss achieved by FedDelAvg, which reveals a strong dependency of learning performance on the values of the weighting and learning rate. Experimental results on a popular ML task indicate significant improvements in terms of convergence speed when optimizing the weighting scheme to account for delays.
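The synchronization step described in the abstract, combining each device's current local model with the delayed global model via a weighting, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter names (`gamma` for the weighting, `delayed_global` for the stale global model) are assumptions introduced here for clarity.

```python
import numpy as np

def feddelavg_sync(local_model: np.ndarray,
                   delayed_global: np.ndarray,
                   gamma: float) -> np.ndarray:
    """Sketch of a FedDelAvg-style synchronization step.

    Blends the device's current local model with the (possibly stale)
    global model it has received, weighted by gamma. gamma = 1.0 would
    discard the local model entirely (standard FedAvg-style overwrite);
    smaller gamma retains more of the local progress made while the
    global model was in transit.
    """
    return gamma * delayed_global + (1.0 - gamma) * local_model

# Illustrative usage: a device blends its local model with a delayed
# global model using an equal weighting.
local = np.array([1.0, 0.0])
stale_global = np.array([0.0, 1.0])
blended = feddelavg_sync(local, stale_global, gamma=0.5)
```

As the abstract's analysis suggests, the choice of `gamma` (together with the learning rate) governs convergence: too much weight on a stale global model can drag devices backward, while too little slows consensus.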