Paper Title
Network-Level Adversaries in Federated Learning
Paper Authors
Paper Abstract
Federated learning is a popular strategy for training models on distributed, sensitive data, while preserving data privacy. Prior work identified a range of security threats on federated learning protocols that poison the data or the model. However, federated learning is a networked system where the communication between clients and server plays a critical role for the learning task performance. We highlight how communication introduces another vulnerability surface in federated learning and study the impact of network-level adversaries on training federated learning models. We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population. Moreover, we show that a coordinated poisoning campaign from a few clients can amplify the dropping attacks. Finally, we develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy. We comprehensively evaluate our attacks and defenses on three datasets, assuming encrypted communication channels and attackers with partial visibility of the network.
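The server-side defense described above could, in principle, be realized as a weighted client-selection step. The following is a minimal sketch, not the paper's actual algorithm: the function name `sample_clients`, the `scores` map (estimated per-client contribution to target-population accuracy), and the `boost` factor are all hypothetical illustrations of the "identify and up-sample helpful clients" idea.

```python
import random

def sample_clients(clients, scores, num_selected, boost=2.0):
    """Pick clients for a federated round, up-weighting those whose
    estimated contribution to target-population accuracy is positive.

    `scores` maps client id -> estimated contribution (hypothetical;
    the abstract does not specify how the server computes it).
    Sampling is with replacement, as in random.choices.
    """
    weights = [boost if scores[c] > 0 else 1.0 for c in clients]
    return random.choices(clients, weights=weights, k=num_selected)

# Illustrative usage: clients 0 and 2 are judged likely to help the
# target population, so they are sampled roughly `boost` times as
# often as the others.
clients = [0, 1, 2, 3, 4]
scores = {0: 0.8, 1: -0.1, 2: 0.5, 3: 0.0, 4: -0.3}
selected = sample_clients(clients, scores, num_selected=3)
```

The design intuition is that up-sampling counteracts a network-level adversary who drops traffic from helpful clients: even if some of their updates are blocked, the surviving helpful clients participate in more rounds.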