Paper Title


Flexible Vertical Federated Learning with Heterogeneous Parties

Paper Authors

Timothy Castiglia, Shiqiang Wang, Stacy Patterson

Paper Abstract


We propose Flexible Vertical Federated Learning (Flex-VFL), a distributed machine learning algorithm that trains a smooth, non-convex function in a distributed system with vertically partitioned data. We consider a system with several parties that wish to collaboratively learn a global function. Each party holds a local dataset; the datasets have different features but share the same sample ID space. The parties are heterogeneous in nature: the parties' operating speeds, local model architectures, and optimizers may differ from one another and, further, may change over time. To train a global model in such a system, Flex-VFL utilizes a form of parallel block coordinate descent, where parties train a partition of the global model via stochastic coordinate descent. We provide theoretical convergence analysis for Flex-VFL and show that the convergence rate is constrained by the party speeds and local optimizer parameters. We apply this analysis and extend our algorithm to adapt party learning rates in response to changing speeds and local optimizer parameters. Finally, we compare the convergence time of Flex-VFL against synchronous and asynchronous VFL algorithms, and illustrate the effectiveness of our adaptive extension.
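To illustrate the setting the abstract describes, the following is a minimal toy sketch (not the paper's actual algorithm or API) of parallel block coordinate descent over vertically partitioned data. Logistic regression stands in for the paper's smooth non-convex models, two hypothetical parties each hold a disjoint feature block over a shared sample ID space, and each party updates only its own coordinate block from a shared mini-batch:

```python
import numpy as np

# Toy vertical-FL sketch: all names and sizes here are illustrative
# assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n, d = 200, 6
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

# Vertical partition: party 0 holds features 0-2, party 1 holds 3-5.
blocks = [slice(0, 3), slice(3, 6)]
w = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):
    p = sigmoid(X @ w)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

loss_before = loss(w)
lr, batch = 0.5, 32
for step in range(300):
    idx = rng.choice(n, size=batch, replace=False)  # shared sample IDs
    # Each party computes partial logits from its own features only;
    # summing them is the only cross-party exchange in this toy.
    logits = sum(X[idx, b] @ w[b] for b in blocks)
    err = sigmoid(logits) - y[idx]
    # Parallel block update: every party descends on its own block,
    # holding the other blocks at their start-of-round values.
    grads = [X[idx, b].T @ err / batch for b in blocks]
    for b, g in zip(blocks, grads):
        w[b] -= lr * g

print(loss_before, loss(w))  # loss should decrease after training
```

In Flex-VFL proper, the parties may additionally run at different speeds with different local models and optimizers; this sketch only shows the block-partitioned stochastic coordinate descent structure.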
