Paper Title
Lethal Dose Conjecture on Data Poisoning
Paper Authors
Paper Abstract
Data poisoning considers an adversary who distorts the training set of a machine learning algorithm for malicious purposes. In this work, we bring to light a conjecture regarding the fundamentals of data poisoning, which we call the Lethal Dose Conjecture. The conjecture states: if $n$ clean training samples are needed for accurate predictions, then in a size-$N$ training set, only $\Theta(N/n)$ poisoned samples can be tolerated while ensuring accuracy. Theoretically, we verify this conjecture in multiple cases. We also offer a more general perspective on this conjecture through distribution discrimination. Deep Partition Aggregation (DPA) and its extension, Finite Aggregation (FA), are recent approaches for provable defenses against data poisoning: they predict through the majority vote of many base models trained by a given learner on different subsets of the training set. The conjecture implies that both DPA and FA are (asymptotically) optimal: given the most data-efficient learner, they can turn it into one of the most robust defenses against data poisoning. This outlines a practical approach to developing stronger defenses against poisoning by finding more data-efficient learners. Empirically, as a proof of concept, we show that simply by using different data augmentations for the base learners, we can double and triple the certified robustness of DPA on CIFAR-10 and GTSRB, respectively, without sacrificing accuracy.
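To make the partition-and-vote mechanism concrete, below is a minimal Python sketch of DPA-style prediction with a simplified certificate. It is an illustration under stated assumptions, not the authors' implementation: the names `partition_of`, `dpa_predict`, `base_models`, and the constant `NUM_PARTITIONS` are hypothetical, and the gap-based radius omits the exact tie-breaking handled in the paper's certificate. The key property is that each base model is trained on one disjoint partition, so a single poisoned sample can flip at most one vote.

```python
import numpy as np

NUM_PARTITIONS = 1000  # k: number of disjoint partitions / base models (illustrative value)

def partition_of(sample_hash: int, k: int = NUM_PARTITIONS) -> int:
    """Assign a training sample to one of k disjoint partitions.

    DPA partitions by a deterministic hash of the sample itself, so
    inserting or removing one poisoned sample changes at most one
    partition, and hence at most one base model's vote.
    """
    return sample_hash % k

def dpa_predict(base_models, x, num_classes: int):
    """Majority-vote prediction with a simplified poisoning certificate.

    base_models: callables mapping an input x to a class index, each
    trained on one partition. Since each poisoned sample flips at most
    one vote, the majority outcome is unchanged by up to floor(gap / 2)
    poisoned samples, where gap is the margin between the top two classes.
    (The paper's exact certificate also accounts for tie-breaking.)
    """
    votes = np.zeros(num_classes, dtype=int)
    for model in base_models:
        votes[model(x)] += 1
    order = np.argsort(votes)               # ascending vote counts
    top, runner_up = int(order[-1]), int(order[-2])
    certified_radius = int(votes[top] - votes[runner_up]) // 2
    return top, certified_radius
```

For intuition on the $\Theta(N/n)$ scaling: if each base learner needs roughly $n$ clean samples to be accurate, one can afford $k \approx N/n$ partitions of size $\approx n$, and the majority vote then tolerates on the order of $k = N/n$ flipped votes, which is why a more data-efficient learner (smaller $n$) directly yields a stronger defense.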