Paper Title

Fed-Focal Loss for imbalanced data classification in Federated Learning

Paper Authors

Dipankar Sarkar, Ankur Narang, Sumit Rai

Paper Abstract

The Federated Learning setting has a central server coordinating the training of a model on a network of devices. One of the challenges is variable training performance when the dataset has a class imbalance. In this paper, we address this by introducing a new loss function called Fed-Focal Loss. We propose to address the class imbalance by reshaping cross-entropy loss such that it down-weights the loss assigned to well-classified examples along the lines of focal loss. Additionally, by leveraging a tunable sampling framework, we take into account selective client model contributions on the central server to further focus the detector during training and hence improve its robustness. Using a detailed experimental analysis with the VIRTUAL (Variational Federated Multi-Task Learning) approach, we demonstrate consistently superior performance in both the balanced and unbalanced scenarios for MNIST, FEMNIST, VSN and HAR benchmarks. We obtain a more than 9% (absolute percentage) improvement in the unbalanced MNIST benchmark. We further show that our technique can be adopted across multiple Federated Learning algorithms to get improvements.
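For context, the focal loss that Fed-Focal Loss builds on (Lin et al., 2017) reshapes cross-entropy as FL(p_t) = -(1 - p_t)^γ log(p_t), where p_t is the predicted probability of the true class and γ ≥ 0 is a focusing parameter. Below is a minimal NumPy sketch of this reshaping as each client might apply it to its local batch; the function name, signature, and γ default are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def focal_loss(probs, targets, gamma=2.0):
        """Focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).

        probs:   (N, C) array of predicted class probabilities (softmax outputs).
        targets: (N,) array of integer class labels.
        gamma:   focusing parameter; gamma = 0 recovers plain cross-entropy.
        """
        # Probability assigned to the true class of each example.
        p_t = probs[np.arange(len(targets)), targets]
        p_t = np.clip(p_t, 1e-12, 1.0)  # numerical safety for the log
        # The (1 - p_t)^gamma factor down-weights well-classified examples
        # (p_t near 1), so hard, typically minority-class examples dominate
        # the gradient on each client.
        return -np.mean((1.0 - p_t) ** gamma * np.log(p_t))

    # Example: a confident correct prediction and a harder one.
    probs = np.array([[0.95, 0.05],
                      [0.40, 0.60]])
    targets = np.array([0, 1])
    print(focal_loss(probs, targets, gamma=0.0))  # ordinary cross-entropy
    print(focal_loss(probs, targets, gamma=2.0))  # easy example down-weighted

With γ = 0 this reduces to ordinary cross-entropy; raising γ shrinks the contribution of well-classified (usually majority-class) examples, which is what redirects training effort toward the imbalanced minority classes. The paper's second ingredient, the tunable sampling of selective client contributions at the server, is not specified in the abstract and is not sketched here.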
