Paper Title
Layer-wised Model Aggregation for Personalized Federated Learning
Paper Authors
Paper Abstract
Personalized Federated Learning (pFL) can not only capture the common priors from a broad range of distributed data, but also support customized models for heterogeneous clients. Research over the past few years has applied the weighted-aggregation manner to produce personalized models, where the weights are determined by calibrating the distance of the entire model parameters or loss values, and has yet to consider the layer-level impacts on the aggregation process, leading to lagged model convergence and inadequate personalization over non-IID datasets. In this paper, we propose a novel pFL training framework dubbed Layer-wised Personalized Federated learning (pFedLA) that can discern the importance of each layer from different clients, and thus is able to optimize the personalized model aggregation for clients with heterogeneous data. Specifically, we employ a dedicated hypernetwork per client on the server side, which is trained to identify the mutual contribution factors at layer granularity. Meanwhile, a parameterized mechanism is introduced to update the layer-wise aggregation weights so as to progressively exploit the inter-user similarity and realize accurate model personalization. Extensive experiments are conducted over different models and learning tasks, and we show that the proposed method achieves significantly higher performance than state-of-the-art pFL methods.
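The core idea described in the abstract (per-layer mixing weights over clients' parameters, rather than one scalar weight per client) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the hypernetwork is stood in for by a hypothetical pre-computed dict of per-layer logits, and `aggregate_personalized` is an illustrative name, not an API from the work.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_personalized(client_params, layer_logits):
    """Sketch of layer-wise weighted aggregation for one target client.

    client_params: list of dicts {layer_name: ndarray}, one dict per client.
    layer_logits:  dict {layer_name: ndarray of shape (num_clients,)} --
                   stands in for the per-client hypernetwork output
                   (a simplifying assumption for illustration).
    Returns a personalized parameter dict where each layer is a softmax-
    weighted combination of the corresponding layer across all clients.
    """
    personalized = {}
    for name in client_params[0]:
        w = softmax(layer_logits[name])                    # mixing weights over clients
        stacked = np.stack([p[name] for p in client_params])
        # Weighted sum across the client axis, independently per layer.
        personalized[name] = np.tensordot(w, stacked, axes=1)
    return personalized
```

Because the weights are produced per layer, one client can, for example, borrow feature-extractor layers mostly from a similar peer while keeping its classifier head mostly local, which a single whole-model distance weight cannot express.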