Paper Title
Challenges and Pitfalls of Bayesian Unlearning
Authors
Abstract
Machine unlearning refers to the task of removing a subset of training data, thereby removing its contribution to a trained model. Approximate unlearning is one class of methods for this task that avoids the need to retrain the model from scratch on the retained data. Bayes' rule can be used to cast approximate unlearning as an inference problem in which the objective is to obtain the updated posterior by dividing out the likelihood of the deleted data. However, this has its own set of challenges, since one often does not have access to the exact posterior over the model parameters. In this work, we examine the use of the Laplace approximation and variational inference to obtain the updated posterior. With a neural network trained on a regression task as the guiding example, we draw insights on the applicability of Bayesian unlearning in practical scenarios.
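A minimal sketch of the posterior update the abstract alludes to, assuming the data are conditionally i.i.d. given the parameters and writing D = D_r ∪ D_f for the retained and deleted subsets (this notation is an illustrative assumption, not taken from the paper):

p(\theta \mid D_r) \;\propto\; \frac{p(\theta \mid D)}{p(D_f \mid \theta)}, \qquad \text{since } p(\theta \mid D) \propto p(D_r \mid \theta)\, p(D_f \mid \theta)\, p(\theta).

In practice p(\theta \mid D) is only available as an approximation (e.g. a Laplace or variational posterior), so the division is carried out on that approximation rather than on the exact posterior, which is the source of the challenges the abstract describes.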