Paper Title

FedCC: Robust Federated Learning against Model Poisoning Attacks

Paper Authors

Hyejun Jeong, Hamin Son, Seohu Lee, Jayun Hyun, Tai-Myoung Chung

Paper Abstract

Federated learning is a distributed framework designed to address privacy concerns. However, it introduces new attack surfaces, which are especially exploitable when data is not independently and identically distributed (non-IID). Existing approaches fail to effectively mitigate malicious influence in this setting, since prior work typically tackles non-IID data and poisoning attacks separately. To address both challenges simultaneously, we present FedCC, a simple yet effective defense algorithm against model poisoning attacks. FedCC clusters clients by the Centered Kernel Alignment (CKA) similarity of their penultimate-layer representations, allowing malicious clients to be identified and filtered out even in non-IID settings. Penultimate-layer representations are informative because later layers are more sensitive to local data distributions, which enables better detection of malicious clients. The careful use of layer-wise CKA similarity mitigates attacks while preserving the useful knowledge already learned. Our extensive experiments demonstrate the effectiveness of FedCC in mitigating both untargeted model poisoning and targeted backdoor attacks. Compared to existing outlier-detection-based and first-order-statistics-based methods, FedCC consistently reduces attack confidence to zero. In particular, it reduces the average degradation of global performance by 65.5%. We believe this new perspective on aggregation makes FedCC a valuable contribution to FL model security and privacy. The code will be made available upon acceptance.
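The core mechanism described in the abstract, comparing penultimate-layer representations via CKA similarity, can be illustrated with a minimal sketch. This is not the authors' implementation: the `linear_cka` function below is the standard linear-kernel CKA formula, and the probe activations are random placeholders standing in for real penultimate-layer outputs.

```python
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two activation
    matrices of shape (n_samples, n_features)."""
    # Center each feature column so the linear kernel is centered.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    num = np.linalg.norm(y.T @ x, ord="fro") ** 2
    den = np.linalg.norm(x.T @ x, ord="fro") * np.linalg.norm(y.T @ y, ord="fro")
    return float(num / den)

# Hypothetical usage: score each client by the CKA similarity between
# its penultimate-layer activations and the global model's, computed
# on the same probe inputs. Low-similarity clients would be candidates
# for filtering before aggregation.
rng = np.random.default_rng(0)
probe_global = rng.normal(size=(128, 64))                       # global model
probe_clients = [rng.normal(size=(128, 64)) for _ in range(5)]  # per client
scores = [linear_cka(probe_global, c) for c in probe_clients]
print([round(s, 3) for s in scores])
```

In the paper's setting, these pairwise similarities would feed a clustering step that separates benign from malicious clients; the random matrices here only demonstrate the CKA computation itself.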
