Paper Title

Proof-of-Learning is Currently More Broken Than You Think

Paper Authors

Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot

Paper Abstract

Proof-of-Learning (PoL) proposes that a model owner log training checkpoints to establish a proof of having expended the computation necessary for training. The authors of PoL forego cryptographic approaches, trading rigorous security guarantees for scalability to deep learning. They argued the benefit of this approach empirically by showing that spoofing (computing a proof for a stolen model) is as expensive as obtaining the proof honestly by training the model. However, recent work has provided a counterexample and thus invalidated this observation. In this work, we first demonstrate that while current PoL verification is indeed not robust to adversaries, recent work has largely underestimated this lack of robustness. This is because existing spoofing strategies are either unreproducible or target weakened instantiations of PoL, meaning they are easily thwarted by changing the hyperparameters of the verification. Instead, we introduce the first spoofing strategies that can be reproduced across different configurations of the PoL verification and that succeed at a fraction of the cost of previous spoofing strategies. This is possible because we identify key vulnerabilities of PoL and systematically analyze the underlying assumptions needed for robust verification of a proof. On the theoretical side, we show how realizing these assumptions reduces to open problems in learning theory. We conclude that one cannot develop a provably robust PoL verification mechanism without further understanding of optimization in deep learning.
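
Since the abstract only sketches the protocol, the following is a minimal illustrative sketch of the PoL idea it describes: the prover logs periodic weight checkpoints while training, and the verifier replays a logged segment and accepts if it reproduces the endpoint within a tolerance. All names (`train_step`, `CHECKPOINT_INTERVAL`, `TOLERANCE`) and the toy update rule are assumptions made for illustration, not the reference PoL implementation.

```python
import numpy as np

CHECKPOINT_INTERVAL = 10   # hypothetical: log a checkpoint every k steps
TOLERANCE = 1e-3           # hypothetical: max reproduction error accepted

def train_step(weights, batch):
    # Stand-in for one deterministic SGD update; a real model would
    # compute gradients of a loss here.
    lr = 0.01
    grad = np.mean(batch, axis=0) * weights
    return weights - lr * grad

def make_proof(weights, batches):
    """Prover: train normally, logging (checkpoint, step index) pairs."""
    proof = [(weights.copy(), 0)]
    for i, batch in enumerate(batches, start=1):
        weights = train_step(weights, batch)
        if i % CHECKPOINT_INTERVAL == 0:
            proof.append((weights.copy(), i))
    return proof

def verify_segment(w_start, w_end, segment_batches):
    """Verifier: replay one logged segment and compare endpoints."""
    w = w_start.copy()
    for batch in segment_batches:
        w = train_step(w, batch)
    return np.linalg.norm(w - w_end) <= TOLERANCE

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w0 = rng.normal(size=5)
    batches = [rng.normal(size=(8, 5)) for _ in range(30)]
    proof = make_proof(w0, batches)
    # Verify the first logged segment (steps 1..CHECKPOINT_INTERVAL).
    (w_start, i), (w_end, j) = proof[0], proof[1]
    print("segment verified:", verify_segment(w_start, w_end, batches[i:j]))
```

In the actual PoL scheme (Jia et al.), the verifier re-executes only the segments with the largest weight updates and accepts within a distance tolerance; those verification hyperparameters (segment selection, tolerance, distance metric) are exactly what this paper argues prior spoofing strategies fail to survive changes to.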
