Paper Title
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Paper Authors
Paper Abstract
Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of machine learning (ML) models, and numerous other papers propose defenses that can withstand most attacks. However, abundant real-world evidence suggests that actual attackers use simple tactics to subvert ML-driven systems, and as a result security practitioners have not prioritized adversarial ML defenses. Motivated by the apparent gap between researchers and practitioners, this position paper aims to bridge the two domains. We first present three real-world case studies from which we can glean practical insights unknown or neglected in research. Next we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots. Finally, we state positions on precise and cost-driven threat modeling, collaboration between industry and academia, and reproducible research. We believe that our positions, if adopted, will increase the real-world impact of future endeavours in adversarial ML, bringing both researchers and practitioners closer to their shared goal of improving the security of ML systems.