Paper Title

An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning

Authors

Xue Yang, Yan Feng, Weijun Fang, Jun Shao, Xiaohu Tang, Shu-Tao Xia, Rongxing Lu

Abstract

Although federated learning improves the privacy of training data by exchanging local gradients or parameters rather than raw data, an adversary can still leverage these local gradients and parameters to recover local training data by launching reconstruction and membership inference attacks. To defend against such privacy attacks, many noise perturbation methods (such as differential privacy or the CountSketch matrix) have been designed. However, these schemes cannot ensure strong defence ability and high learning accuracy at the same time, which impedes the wide application of FL in practice (especially for medical or financial institutions that require both high accuracy and a strong privacy guarantee). To overcome this issue, in this paper, we propose \emph{an efficient model perturbation method for federated learning} to defend against reconstruction and membership inference attacks launched by curious clients. On the one hand, similar to differential privacy, our method selects random numbers as perturbation noises added to the global model parameters, so it is very efficient and easy to integrate in practice. Meanwhile, the randomly selected noises are positive real numbers whose values can be arbitrarily large, which ensures strong defence ability. On the other hand, unlike differential privacy or other perturbation methods that cannot eliminate the added noises, our method allows the server to recover the true gradients by eliminating the added noises. Therefore, our method does not hinder learning accuracy at all.
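The key contrast the abstract draws is between irrecoverable noise (as in differential privacy) and removable noise known only to the server. The toy sketch below illustrates that generic pattern only; it is not the paper's actual protocol, and all function names and noise ranges are illustrative assumptions. The point is that a party who generates and records the noise can subtract it exactly, so perturbation costs no accuracy, whereas an observer without the noise sees arbitrarily large distortion.

```python
import random

def perturb(values, rng):
    """Add large positive random noise to each value (generic illustration,
    not the paper's scheme); return the noisy values and the recorded noise."""
    noise = [rng.uniform(1e3, 1e6) for _ in values]  # positive, arbitrarily large
    return [v + n for v, n in zip(values, noise)], noise

def recover(noisy_values, noise):
    """Only the noise holder can do this: subtract the recorded noise
    to recover the original values exactly (up to float precision)."""
    return [v - n for v, n in zip(noisy_values, noise)]

rng = random.Random(0)
true_grads = [0.12, -0.53, 0.07]         # stand-in for gradient coordinates
noisy, noise = perturb(true_grads, rng)  # what a curious observer would see
recovered = recover(noisy, noise)        # server-side recovery, no accuracy loss
assert all(abs(r - t) < 1e-6 for r, t in zip(recovered, true_grads))
```

Differential privacy deliberately makes such recovery impossible for everyone, which is what forces its privacy/accuracy trade-off; a removable perturbation sidesteps that trade-off at the cost of trusting the noise holder.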
