Paper Title

PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models

Paper Authors

William Hackett, Stefan Trawicki, Zhengxin Yu, Neeraj Suri, Peter Garraghan

Paper Abstract

Adversarial extraction attacks constitute an insidious threat against Deep Learning (DL) models, in which an adversary aims to steal the architecture, parameters, and hyper-parameters of a targeted DL model. The existing extraction attack literature has observed varying levels of attack success for different DL models and datasets, yet the underlying cause(s) behind their susceptibility often remain unclear, and understanding them would help facilitate creating secure DL systems. In this paper we present PINCH: an efficient and automated extraction attack framework capable of designing, deploying, and analyzing extraction attack scenarios across heterogeneous hardware platforms. Using PINCH, we perform an extensive experimental evaluation of extraction attacks against 21 model architectures to explore new extraction attack scenarios and further attack staging. Our findings show (1) key extraction characteristics whereby particular model configurations exhibit strong resilience against specific attacks, (2) even partial extraction success enables further staging of other adversarial attacks, and (3) equivalent stolen models uncover differences in expressive power, yet exhibit similar captured knowledge.
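
To make the threat model concrete, below is a minimal sketch of a generic query-based extraction attack of the kind the abstract describes: an adversary with only black-box access queries the victim model and trains a surrogate ("stolen") model to mimic its outputs. This is an illustrative PyTorch example under stated assumptions, not the PINCH framework itself; the function `extract_surrogate` and all hyperparameters are hypothetical.

```python
# Minimal, illustrative sketch of query-based model extraction (PyTorch).
# The adversary has only black-box access to `victim` and distills its
# behaviour into `surrogate`. This is NOT the PINCH framework; all names
# and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn.functional as F

def extract_surrogate(victim, surrogate, query_loader, epochs=10, lr=1e-3):
    """Train `surrogate` to mimic `victim` using only its query responses."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    victim.eval()
    for _ in range(epochs):
        for x, _ in query_loader:  # ground-truth labels unused: black-box setting
            with torch.no_grad():
                soft_labels = F.softmax(victim(x), dim=1)  # victim's predictions
            opt.zero_grad()
            # KL divergence between surrogate and victim output distributions
            loss = F.kl_div(F.log_softmax(surrogate(x), dim=1),
                            soft_labels, reduction="batchmean")
            loss.backward()
            opt.step()
    return surrogate
```

Matching the victim's soft output distribution rather than hard labels is a common choice in the extraction literature, since each query then transfers more of the victim's learned knowledge to the surrogate.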
