Paper Title

Federated Residual Learning

Paper Authors

Alekh Agarwal, John Langford, Chen-Yu Wei

Paper Abstract

We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model. Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides. Our framework is robust to data heterogeneity, addressing the slow convergence problem traditional federated learning methods face when the data is non-i.i.d. across clients. We test the theory empirically and find substantial performance gains over baselines.
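As a rough illustration of the joint-prediction scheme the abstract describes, the sketch below assumes the client's prediction is the sum of the server-side shared model's output and the client's personalized local (residual) model's output. The linear model form and all names here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def joint_predict(x, w_shared, w_local):
    """Joint prediction: shared-model output plus the client's
    personalized residual correction (illustrative linear models)."""
    return x @ w_shared + x @ w_local

# Toy example: the shared model captures signal common to all clients,
# while each client's local model fits only its own residual.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                 # a batch of client features
w_shared = np.array([1.0, 0.5, -0.2])       # trained jointly across clients
w_local = np.array([0.1, -0.3, 0.0])        # personalized, kept on-client

pred = joint_predict(x, w_shared, w_local)

# For linear models, the additive scheme is equivalent to a single
# model with summed weights; the benefit is that only w_shared is
# communicated, so the shared model can stay simple.
assert np.allclose(pred, x @ (w_shared + w_local))
```

Under this additive view, heterogeneous clients can disagree arbitrarily through their local residuals without forcing the shared model to fit conflicting data, which is the intuition behind the robustness-to-heterogeneity claim above.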
