Paper Title

Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization

Authors

Rui Hu, Yanmin Gong, Yuanxiong Guo

Abstract

Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other. However, data locality does not provide sufficient privacy protection, and it is desirable to facilitate FL with a rigorous differential privacy (DP) guarantee. Existing DP mechanisms would introduce random noise with magnitude proportional to the model size, which can be quite large in deep neural networks. In this paper, we propose a new FL framework with sparsification-amplified privacy. Our approach integrates random sparsification with gradient perturbation on each agent to amplify the privacy guarantee. Since sparsification increases the number of communication rounds required to reach a target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to reduce the privacy cost. We rigorously analyze the convergence of our approach and use Rényi DP to tightly account for the end-to-end DP guarantee. Extensive experiments on benchmark datasets validate that our approach outperforms previous differentially private FL approaches in both privacy guarantee and communication efficiency.
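
The core mechanism described in the abstract, per-agent gradient perturbation combined with random sparsification, can be sketched as below. This is a minimal illustration under assumptions, not the paper's implementation: the function name `private_sparse_update`, its parameters, and the ordering (sparsify, clip, then perturb only the retained coordinates) are hypothetical, chosen to reflect the abstract's point that injected noise need not scale with the full model size. The Rényi DP accounting and the acceleration technique mentioned in the abstract are omitted here.

```python
import numpy as np

def private_sparse_update(grad, clip_norm, sigma, k, rng):
    """Sketch of one agent's sparsified, differentially private update.

    Hypothetical illustration only: the name, parameters, and ordering
    are assumptions, not the paper's actual interface.
    """
    # Random sparsification: keep k of the d coordinates, chosen
    # uniformly at random.
    idx = rng.choice(grad.size, size=k, replace=False)
    sparse = grad.flat[idx].astype(float)

    # Clip the retained coordinates to bound each agent's sensitivity.
    sparse *= min(1.0, clip_norm / (np.linalg.norm(sparse) + 1e-12))

    # Gaussian perturbation on only the k retained coordinates, so the
    # injected noise scales with k rather than the full model size d.
    sparse += rng.normal(0.0, sigma * clip_norm, size=k)

    # Re-embed into a d-dimensional vector for aggregation at the server.
    update = np.zeros(grad.size)
    update[idx] = sparse
    return update.reshape(grad.shape)


# Toy usage: a 10,000-dimensional gradient, keeping 10% of coordinates.
rng = np.random.default_rng(0)
g = rng.normal(size=10_000)
u = private_sparse_update(g, clip_norm=1.0, sigma=1.1, k=1_000, rng=rng)
```

Only the k noisy coordinates (plus their indices) would be communicated each round, which is also the source of the communication savings the abstract highlights.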
