Paper Title
VeriFi: Towards Verifiable Federated Unlearning
Paper Authors
Paper Abstract
Federated learning (FL) is a collaborative learning paradigm where participants jointly train a powerful model without sharing their private data. One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request that its private data be deleted from the global model. However, unlearning itself may not be enough to implement RTBF unless the unlearning effect can be independently verified, an important aspect that has been overlooked in the current literature. In this paper, we introduce the concept of verifiable federated unlearning and propose VeriFi, a unified framework integrating federated unlearning and verification that allows systematic analysis and quantification of the unlearning effect under different combinations of multiple unlearning and verification methods. In VeriFi, the leaving participant is granted the right to verify (RTV): the participant notifies the server before leaving, then actively verifies the unlearning effect over the next few communication rounds. The unlearning is done at the server side immediately after receiving the leaving notification, while the verification is done locally by the leaving participant via two steps: marking (injecting carefully designed markers to fingerprint the leaver) and checking (examining the change of the global model's performance on the markers). Based on VeriFi, we conduct the first systematic and large-scale study of verifiable federated unlearning, considering 7 unlearning methods and 5 verification methods. In particular, we propose a more efficient and FL-friendly unlearning method, and two more effective and robust non-invasive verification methods. We extensively evaluate VeriFi on 7 datasets and 4 types of deep learning models. Our analysis establishes important empirical understanding for more trustworthy federated unlearning.
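The leaver-side verification loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: all names (`make_markers`, `check_unlearning`, the score drop threshold `min_drop`, and the toy callable "models") are assumptions introduced here for clarity.

```python
# Hedged sketch of VeriFi's two-step verification from the leaver's side:
# marking (craft marker samples that fingerprint the leaver) and checking
# (compare the global model's performance on those markers before and
# after unlearning). Names and thresholds are illustrative assumptions.

import random


def make_markers(num_markers=5, dim=4, seed=0):
    """Marking: craft marker samples that fingerprint the leaver."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(dim)] for _ in range(num_markers)]


def model_score_on(markers, model):
    """Stand-in for evaluating the global model on the markers.
    Here `model` is just a callable returning a per-sample score."""
    return sum(model(m) for m in markers) / len(markers)


def check_unlearning(markers, model_before, model_after, min_drop=0.2):
    """Checking: unlearning is deemed verified if the global model's
    score on the leaver's markers drops by at least `min_drop`."""
    before = model_score_on(markers, model_before)
    after = model_score_on(markers, model_after)
    return (before - after) >= min_drop


markers = make_markers()
# Toy stand-in models: before unlearning the model "remembers" the
# markers (high score); after unlearning it no longer does.
before_model = lambda m: 0.9
after_model = lambda m: 0.3
print(check_unlearning(markers, before_model, after_model))  # True
```

In the paper's setting, the score would come from evaluating the downloaded global model on the markers over the few communication rounds after the leaving notification, rather than from toy callables as here.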