Paper Title
Voting-based Approaches For Differentially Private Federated Learning
Paper Authors
Paper Abstract
Differentially Private Federated Learning (DPFL) is an emerging field with many applications. Gradient-averaging-based DPFL methods require costly communication rounds and hardly work with large-capacity models, due to the explicit dimension dependence in their added noise. In this work, inspired by the knowledge-transfer approach to non-federated private learning of Papernot et al. (2017; 2018), we design two new DPFL schemes that vote among the data labels returned by each local model instead of averaging gradients, which avoids the dimension dependence and significantly reduces the communication cost. Theoretically, by applying secure multi-party computation, we can exponentially amplify the (data-dependent) privacy guarantees when the margin of the voting scores is large. Extensive experiments show that our approaches significantly improve the privacy-utility trade-off over the state of the art in DPFL.
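To make the voting idea concrete, below is a minimal sketch of PATE-style noisy plurality voting over the labels returned by the local models, which is the general aggregation pattern the abstract describes. The function name `noisy_label_vote`, the Laplace noise, and the noise scale `gamma` are illustrative assumptions, not the paper's exact mechanism (which additionally relies on secure multi-party computation and data-dependent privacy analysis).

```python
import numpy as np

def noisy_label_vote(local_predictions, num_classes, gamma, rng=None):
    """Aggregate one sample's labels from the local models by noisy plurality vote.

    local_predictions: predicted class indices, one per local model.
    gamma: scale of the Laplace noise added to each vote count
           (more noise -> stronger privacy, weaker utility).
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(np.asarray(local_predictions), minlength=num_classes)
    noisy_counts = counts + rng.laplace(scale=gamma, size=num_classes)
    return int(np.argmax(noisy_counts))

# Example: 10 local models vote on one unlabeled sample (3 classes).
votes = [2, 2, 2, 1, 2, 2, 0, 2, 2, 1]
print(noisy_label_vote(votes, num_classes=3, gamma=1.0))
# Most likely 2: a large voting margin keeps the noisy outcome stable,
# which is the regime where the abstract's privacy amplification applies.
```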