Paper Title

Byzantine Resilient Distributed Multi-Task Learning

Authors

Jiani Li, Waseem Abbas, Xenofon Koutsoukos

Abstract

Distributed multi-task learning provides significant advantages in multi-agent networks with heterogeneous data sources, where agents aim to learn distinct but correlated models simultaneously. However, distributed algorithms for learning relatedness among tasks are not resilient in the presence of Byzantine agents. In this paper, we present an approach for Byzantine resilient distributed multi-task learning. We propose an efficient online weight assignment rule by measuring the accumulated loss using an agent's data and its neighbors' models. A small accumulated loss indicates a large similarity between the two tasks. In order to ensure the Byzantine resilience of the aggregation at a normal agent, we introduce a step for filtering out larger losses. We analyze the approach for convex models and show that normal agents converge resiliently towards the global minimum. Further, aggregation with the proposed weight assignment rule always results in an improved expected regret compared to the non-cooperative case. Finally, we demonstrate the approach using three case studies, including regression and classification problems, and show that our method exhibits good empirical performance for non-convex models, such as convolutional neural networks.
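To make the aggregation scheme described in the abstract concrete, below is a minimal Python sketch of one plausible reading of the loss-based rule: each agent evaluates its neighbors' models on its own data, filters out neighbors whose accumulated loss is larger than its own, and weights the survivors inversely to their loss. The function name `resilient_aggregate`, the inverse-loss weighting, and the exact filter threshold are illustrative assumptions, not the paper's precise algorithm.

```python
import numpy as np

def resilient_aggregate(own_model, own_loss, neighbor_models, neighbor_losses):
    """Hypothetical sketch of loss-based resilient aggregation.

    neighbor_losses[j] is the loss accumulated by evaluating neighbor j's
    model on this agent's own data; a small accumulated loss suggests the
    neighbor's task is similar to this agent's task.
    """
    # Filtering step (Byzantine resilience): drop neighbors whose
    # accumulated loss exceeds the agent's own accumulated loss.
    kept = [(m, l) for m, l in zip(neighbor_models, neighbor_losses)
            if l <= own_loss]
    if not kept:
        # No trustworthy neighbors survive: fall back to the
        # non-cooperative (local-only) model.
        return own_model

    models, losses = zip(*kept)
    models = list(models) + [own_model]
    losses = np.array(list(losses) + [own_loss])

    # Weight assignment: inverse of accumulated loss, normalized so the
    # weights sum to one (small loss => similar task => large weight).
    raw = 1.0 / (losses + 1e-12)
    weights = raw / raw.sum()

    # Aggregate the model parameter vectors with the assigned weights.
    return sum(w * m for w, m in zip(weights, models))
```

As a usage example, if `own_model` and each entry of `neighbor_models` are NumPy parameter vectors, an agent would call this once per round after computing the accumulated losses, then continue its local gradient update from the aggregated model.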
