Paper Title

Straggler-Resilient Personalized Federated Learning

Authors

Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari

Abstract

Federated Learning is an emerging learning paradigm that allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions. Despite its success, federated learning faces several challenges related to its decentralized nature. In this work, we develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles, namely (i) data heterogeneity, i.e., data distributions can vary substantially across clients, and (ii) system heterogeneity, i.e., the computational power of the clients could differ significantly. Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client. Furthermore, our method mitigates the effects of stragglers by adaptively selecting clients based on their computational characteristics and statistical significance, thus achieving, for the first time, near optimal sample complexity and provable logarithmic speedup. Experimental results support our theoretical findings showing the superiority of our method over alternative personalized federated schemes in system and data heterogeneous environments.
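To make the abstract's core idea concrete, below is a minimal illustrative sketch, not the authors' exact algorithm, of personalized federated learning with a shared linear representation and per-client heads, where each round only the fastest-responding clients participate as a simplified stand-in for straggler-resilient client selection. The `Client` class, the `speed` attribute, the alternating least-squares/gradient updates, and all other names are assumptions introduced here for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's method): a shared d x k
# representation B is learned across clients, while each client keeps a
# personalized head w_i. Straggler mitigation is simplified to "use the
# fastest half of the clients each round".
import numpy as np

rng = np.random.default_rng(0)
d, k, n_clients, m = 20, 5, 30, 50            # ambient dim, representation dim, #clients, samples/client
B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]

class Client:
    def __init__(self, speed):
        self.speed = speed                     # proxy for computational power (system heterogeneity)
        self.w = rng.normal(size=k)            # client-specific head (data heterogeneity)
        self.X = rng.normal(size=(m, d))
        self.y = self.X @ B_true @ self.w + 0.01 * rng.normal(size=m)

    def update_head(self, B):
        # Local personalization: least-squares fit of the head given the shared representation.
        self.w, *_ = np.linalg.lstsq(self.X @ B, self.y, rcond=None)

    def grad_B(self, B):
        # Gradient of the local squared loss with respect to the shared representation.
        r = self.X @ B @ self.w - self.y
        return self.X.T @ np.outer(r, self.w) / m

clients = [Client(speed=rng.uniform(0.1, 1.0)) for _ in range(n_clients)]
B = np.linalg.qr(rng.normal(size=(d, k)))[0]

for rnd in range(100):
    # Simplified straggler handling: only the fastest half of clients respond this round.
    active = sorted(clients, key=lambda c: -c.speed)[: n_clients // 2]
    for c in active:
        c.update_head(B)
    grad = sum(c.grad_B(B) for c in active) / len(active)
    B = np.linalg.qr(B - 0.1 * grad)[0]        # server step: aggregate, then re-orthonormalize

err = np.linalg.norm(B @ B.T - B_true @ B_true.T)
print(f"subspace error after training: {err:.3f}")
```

The selection rule above is deliberately crude; the paper's contribution is an adaptive scheme that weighs clients' computational characteristics against their statistical significance, which this sketch does not attempt to reproduce.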
