Paper Title

Adaptive Configuration for Heterogeneous Participants in Decentralized Federated Learning

Paper Authors

Yunming Liao, Yang Xu, Hongli Xu, Lun Wang, Chen Qian

Paper Abstract

Data generated at the network edge can be processed locally by leveraging the paradigm of edge computing (EC). Aided by EC, decentralized federated learning (DFL), which overcomes the single-point-of-failure problem of parameter server (PS) based federated learning, is becoming a practical and popular approach for machine learning over distributed data. However, DFL faces two critical challenges introduced by edge devices, i.e., system heterogeneity and statistical heterogeneity. To ensure fast convergence in the presence of slow edge devices, we present an efficient DFL method, termed FedHP, which integrates adaptive control of both the local updating frequency and the network topology to better support heterogeneous participants. We establish a theoretical relationship between local updating frequency and network topology with respect to model training performance, and obtain a convergence upper bound. Building on this bound, we propose an optimization algorithm that adaptively determines the local updating frequencies and constructs the network topology, so as to speed up convergence and improve model accuracy. Evaluation results show that, compared with the baselines, the proposed FedHP can reduce the completion time by about 51% and improve model accuracy by at least 5% in heterogeneous scenarios.
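For a concrete picture of the two quantities the abstract says FedHP adapts, below is a minimal, hypothetical Python sketch of a generic decentralized-SGD loop. It is not the paper's algorithm: the per-device local update frequencies `tau`, the ring mixing matrix `W`, the synthetic least-squares data, and all hyperparameters are illustrative assumptions, chosen only to show where local updating frequency and network topology enter DFL training.

```python
import numpy as np

# Generic decentralized SGD sketch (NOT FedHP itself). It marks the two
# knobs the paper adapts: tau (local updating frequency per device) and
# W (mixing matrix induced by the network topology).
rng = np.random.default_rng(0)

n_devices, dim = 4, 5
# Synthetic local least-squares problems; each device draws data from a
# different distribution (statistical heterogeneity). Entirely made up.
A = [rng.normal(loc=i, scale=1.0, size=(20, dim)) for i in range(n_devices)]
b = [a @ rng.normal(size=dim) + 0.1 * rng.normal(size=20) for a in A]

def grad(i, x):
    """Gradient of the local objective 0.5 * ||A_i x - b_i||^2 / m_i."""
    return A[i].T @ (A[i] @ x - b[i]) / len(b[i])

# Local updating frequencies: slower devices get fewer local steps per
# round (system heterogeneity). Illustrative values, not from the paper.
tau = [4, 2, 1, 3]

# Doubly stochastic mixing matrix over a fixed ring topology; FedHP would
# construct this topology adaptively instead.
W = np.zeros((n_devices, n_devices))
for i in range(n_devices):
    for j in (i - 1, i, i + 1):
        W[i, j % n_devices] = 1.0 / 3.0

x = [np.zeros(dim) for _ in range(n_devices)]
lr = 0.01

for rnd in range(50):
    # Local phase: device i runs tau[i] gradient steps on its own data.
    for i in range(n_devices):
        for _ in range(tau[i]):
            x[i] = x[i] - lr * grad(i, x[i])
    # Communication phase: gossip averaging with neighbors per W.
    x = [sum(W[i, j] * x[j] for j in range(n_devices))
         for i in range(n_devices)]

gap = np.mean([np.linalg.norm(x[i] - x[0]) for i in range(n_devices)])
print(f"consensus gap across devices after 50 rounds: {gap:.4f}")
```

In FedHP, per the abstract, both knobs are set adaptively from the derived convergence bound rather than fixed in advance; the constants above merely mark the interfaces such an optimizer would control.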
