Paper Title
Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
Paper Authors
Paper Abstract
Federated learning (FL) has emerged as a prominent distributed learning paradigm. FL raises pressing needs for novel parameter estimation approaches with theoretical guarantees of convergence that are also communication-efficient, differentially private, and Byzantine resilient in heterogeneous data distribution settings. Quantization-based SGD solvers have been widely adopted in FL, and the recently proposed SIGNSGD with majority vote points in a promising direction. However, no existing method enjoys all of the aforementioned properties. In this paper, we propose an intuitively simple yet theoretically sound method based on SIGNSGD to bridge the gap. We present Stochastic-Sign SGD, which utilizes novel stochastic-sign-based gradient compressors to enable the aforementioned properties in a unified framework. We also present an error-feedback variant of the proposed Stochastic-Sign SGD, which further improves the learning performance in FL. We test the proposed method with extensive experiments using deep neural networks on the MNIST and CIFAR-10 datasets. The experimental results corroborate the effectiveness of the proposed method.
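The abstract describes a stochastic-sign gradient compressor combined with server-side majority vote. The following is a minimal illustrative sketch, not the paper's implementation: the probability rule `(1 + g/B)/2`, the scale parameter `B`, and the function names are assumptions chosen so that the compressed sign is an unbiased estimator of `g/B` whenever `|g| <= B`, which is the standard construction behind stochastic-sign compressors.

```python
import numpy as np

def sto_sign(grad, B, rng):
    """Hypothetical stochastic-sign compressor: each coordinate g_i is mapped
    to +1 with probability (1 + g_i/B)/2 and to -1 otherwise, so the output
    is an unbiased estimate of grad/B when |g_i| <= B."""
    p = np.clip((1.0 + grad / B) / 2.0, 0.0, 1.0)
    return np.where(rng.random(grad.shape) < p, 1.0, -1.0)

def majority_vote(worker_signs):
    """Server aggregation: element-wise majority vote over worker sign vectors."""
    return np.sign(np.sum(worker_signs, axis=0))

# Toy round: 5 workers compress noisy copies of the same gradient,
# and the server aggregates the 1-bit messages by majority vote.
rng = np.random.default_rng(0)
true_grad = np.array([0.5, -0.2, 0.1])
workers = [true_grad + 0.05 * rng.standard_normal(3) for _ in range(5)]
votes = np.stack([sto_sign(g, B=1.0, rng=rng) for g in workers])
update = majority_vote(votes)  # a {-1, 0, +1}-valued descent direction
```

Only one bit per coordinate travels from each worker to the server, which is the source of the communication efficiency the abstract claims; the randomization in the compressor is also what the paper exploits for its privacy and heterogeneity arguments.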