Paper Title


Local Model Reconstruction Attacks in Federated Learning and their Uses

Authors

Ilias Driouich, Chuan Xu, Giovanni Neglia, Frederic Giroire, Eoin Thomas

Abstract


In this paper, we initiate the study of local model reconstruction attacks for federated learning, where an honest-but-curious adversary eavesdrops on the messages exchanged between a targeted client and the server, and then reconstructs the local/personalized model of the victim. The local model reconstruction attack allows the adversary to trigger other classical attacks more effectively, since the local model depends only on the client's data and can leak more private information than the global model learned by the server. Additionally, we propose a novel model-based attribute inference attack in federated learning that leverages the local model reconstruction attack. We provide an analytical lower bound for this attribute inference attack. Empirical results on real-world datasets confirm that our local reconstruction attack works well for both regression and classification tasks. Moreover, we benchmark our novel attribute inference attack against state-of-the-art attacks in federated learning. Our attack achieves higher reconstruction accuracy, especially when the clients' datasets are heterogeneous. Our work provides a new angle for designing powerful and explainable attacks to effectively quantify the privacy risk in FL.
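To give intuition for why eavesdropped updates can reveal a local model, here is a minimal illustrative sketch (not the paper's actual algorithm). It assumes a hypothetical least-squares client whose gradient is an affine function of the model, g(w) = Aw − b, with A and b determined solely by the client's private data; an eavesdropper who records enough (global model, client gradient) pairs can identify this affine map and solve for the client's local optimum A⁻¹b without ever seeing the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical victim client with a private least-squares dataset (X, y).
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=n)

# For least squares, the client's gradient at model w is
#   g(w) = A w - b,  with A = X^T X / n and b = X^T y / n,
# an affine function of w that depends only on the client's data.
A = X.T @ X / n
b = X.T @ y / n

# The eavesdropper records (w_t, g_t) pairs from d + 1 communication
# rounds -- enough to identify the affine map g(w) = A w - b.
W = rng.normal(size=(d + 1, d))    # observed global models (one per row)
G = W @ A.T - b                    # observed client gradients

# Fit the affine map from the observations, then solve for A^{-1} b.
W_aug = np.hstack([W, np.ones((d + 1, 1))])        # rows [w_t, 1]
theta, *_ = np.linalg.lstsq(W_aug, G, rcond=None)  # theta = [A^T; -b]
A_hat = theta[:d].T
b_hat = -theta[d]
w_local = np.linalg.solve(A_hat, b_hat)            # reconstructed local model

print(np.allclose(w_local, np.linalg.solve(A, b)))
```

The reconstructed `w_local` coincides with the victim's local optimum, illustrating the abstract's point that the local model is a function of the client's data alone; the paper's actual attack handles general models and realistic FL protocols rather than this idealized noiseless linear case.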
