Paper Title
Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance
Paper Authors
Paper Abstract
While it is an effective framework for learning a shared model across multiple edge devices, federated learning (FL) is generally vulnerable to Byzantine attacks from adversarial edge devices. Although existing works on FL mitigate such compromised devices by aggregating only a subset of the local models at the server side, they still fail to ignore outliers due to imprecise scoring rules. In this paper, we propose an effective Byzantine-robust FL framework, namely dummy contrastive aggregation, by defining a novel scoring function that sensitively discriminates whether a model has been poisoned. The key idea is to extract essential information from every local model, along with the previous global model, to define a distance measure in a manner similar to triplet loss. Numerical results validate the advantage of the proposed approach by showing improved performance compared to state-of-the-art Byzantine-resilient aggregation methods, e.g., Krum, Trimmed-mean, and Fang.
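To make the triplet-style scoring concrete, below is a minimal NumPy sketch of how such a rule might look. This is an illustrative assumption, not the paper's exact construction: the function names, the use of a single dummy reference model, and the fixed keep-ratio are all hypothetical, and the paper's "essential information" extraction step is abstracted away by working directly on flattened parameter vectors.

```python
import numpy as np

def triplet_scores(local_models, prev_global, dummy_model):
    """Hypothetical triplet-style scoring sketch (not the paper's exact rule).

    Each local model is scored by how much closer it lies to the previous
    global model (anchor role) than to a dummy reference model (negative
    role), echoing the triplet-loss form d(anchor, positive) - d(anchor,
    negative). Lower scores suggest benign updates; higher scores flag
    likely poisoned outliers.
    """
    scores = []
    for w in local_models:
        d_pos = np.linalg.norm(w - prev_global)  # distance to trusted anchor
        d_neg = np.linalg.norm(w - dummy_model)  # distance to dummy reference
        scores.append(d_pos - d_neg)
    return np.array(scores)

def robust_aggregate(local_models, prev_global, dummy_model, keep_ratio=0.5):
    """Average only the lowest-scoring (most trusted) local models."""
    scores = triplet_scores(local_models, prev_global, dummy_model)
    k = max(1, int(keep_ratio * len(local_models)))
    keep = np.argsort(scores)[:k]  # indices of the k most trusted updates
    return np.mean([local_models[i] for i in keep], axis=0)
```

In this sketch the server would call `robust_aggregate` once per round with the flattened local updates, the previous global model, and a dummy reference; how the dummy is constructed and how many models are kept are design choices the paper itself specifies.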