Paper Title

SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning

Authors

Harsh Chaudhari, Matthew Jagielski, Alina Oprea

Abstract

Secure multiparty computation (MPC) has been proposed to allow multiple mutually distrustful data owners to jointly train machine learning (ML) models on their combined data. However, by design, MPC protocols faithfully compute the training functionality, which the adversarial ML community has shown to leak private information and to be susceptible to tampering via poisoning attacks. In this work, we argue that model ensembles, implemented in our framework called SafeNet, are a highly MPC-amenable way to avoid many adversarial ML attacks. The natural partitioning of data amongst owners in MPC training allows this approach to be highly scalable at training time, to provide provable protection from poisoning attacks, and to provide provable defense against a number of privacy attacks. We demonstrate SafeNet's efficiency, accuracy, and resilience to poisoning on several machine learning datasets and models trained in end-to-end and transfer learning scenarios. For instance, SafeNet reduces backdoor attack success significantly, while achieving $39\times$ faster training and $36\times$ less communication than the four-party MPC framework of Dalskov et al. Our experiments show that ensembling retains these benefits even in many non-IID settings. The simplicity, cheap setup, and robustness properties of ensembling make it a strong first choice for training ML models privately in MPC.
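The core idea in the abstract, each owner training a model on its own local partition and the ensemble predicting by majority vote, can be illustrated with a minimal plaintext sketch. The snippet below is not the paper's implementation: the owner count (m = 5), the scikit-learn logistic regression models, the synthetic data, and the helper `ensemble_predict` are all illustrative assumptions; in SafeNet itself, owners train locally and the majority vote is evaluated inside an MPC protocol so that no individual model or data point is revealed.

```python
# Plaintext sketch of a SafeNet-style ensemble (illustrative only; the real
# framework computes the vote under MPC rather than in the clear).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

m = 5  # number of data owners (assumed value for illustration)
# Each owner holds a disjoint shard of the data; no pooling occurs.
partitions = np.array_split(rng.permutation(len(X)), m)

# Each owner trains its own model independently on its local shard.
models = [LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
          for idx in partitions]

def ensemble_predict(X_query):
    """Majority vote over the owners' models (computed under MPC in SafeNet)."""
    votes = np.stack([model.predict(X_query) for model in models])  # (m, n)
    # For each query point, the label with the most votes wins.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print(ensemble_predict(X[:5]))
```

This structure also conveys the intuition behind the provable poisoning protection claimed above: an adversary controlling fewer than half of the owners can corrupt only a minority of the votes, so the majority prediction is unchanged whenever the remaining clean models agree.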
