Paper Title

CalFAT: Calibrated Federated Adversarial Training with Label Skewness

Paper Authors

Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu

Abstract

Recent studies have shown that, like traditional machine learning, federated learning (FL) is also vulnerable to adversarial attacks. To improve the adversarial robustness of FL, federated adversarial training (FAT) methods have been proposed to apply adversarial training locally before global aggregation. Although these methods demonstrate promising results on independent identically distributed (IID) data, they suffer from training instability on non-IID data with label skewness, resulting in degraded natural accuracy. This tends to hinder the application of FAT in real-world applications where the label distribution across the clients is often skewed. In this paper, we study the problem of FAT under label skewness, and reveal one root cause of the training instability and natural accuracy degradation issues: skewed labels lead to non-identical class probabilities and heterogeneous local models. We then propose a Calibrated FAT (CalFAT) approach to tackle the instability issue by calibrating the logits adaptively to balance the classes. We show both theoretically and empirically that the optimization of CalFAT leads to homogeneous local models across the clients and better convergence points.
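The abstract's key idea is calibrating logits adaptively to balance the classes under skewed local label distributions. The sketch below illustrates one common form of such calibration — adding the log of the client's class prior to the logits before the softmax cross-entropy — as a minimal, hypothetical stand-in; the exact CalFAT calibration is defined in the paper itself, not in this abstract, and the function and variable names here are illustrative only.

```python
import numpy as np

def calibrated_cross_entropy(logits, labels, class_prior):
    """Cross-entropy on prior-calibrated logits (illustrative sketch).

    Adds log(class_prior) to the logits so that majority classes on a
    label-skewed client do not dominate the loss. This is a generic
    logit-adjustment scheme, not necessarily CalFAT's exact formulation.
    """
    adjusted = logits + np.log(class_prior)           # calibrate logits by local prior
    adjusted -= adjusted.max(axis=1, keepdims=True)   # numerical stability
    log_probs = adjusted - np.log(np.exp(adjusted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Skewed local label distribution: class 0 dominates on this client.
prior = np.array([0.8, 0.1, 0.1])
logits = np.array([[2.0, 0.5, 0.3],
                   [0.2, 1.5, 0.1]])
labels = np.array([0, 1])
loss = calibrated_cross_entropy(logits, labels, prior)
print(loss)
```

In a FAT setting, a loss of this shape would be applied to adversarial examples generated locally on each client before the calibrated local models are aggregated by the server.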
