Paper Title

Adversarial Attacks on Deep Learning Systems for User Identification based on Motion Sensors

Paper Authors

Cezara Benegui, Radu Tudor Ionescu

Paper Abstract

For the time being, mobile devices employ explicit authentication mechanisms, namely, unlock patterns, PINs or biometric-based systems such as fingerprint or face recognition. Since these systems are prone to well-known attacks, the introduction of an implicit and unobtrusive authentication layer can greatly enhance security. In this study, we focus on deep learning methods for implicit authentication based on motion sensor signals. In this scenario, attackers could craft adversarial examples with the aim of gaining unauthorized access and even restraining a legitimate user from accessing his mobile device. To our knowledge, this is the first study that aims at quantifying the impact of adversarial attacks on machine learning models used for user identification based on motion sensors. To accomplish our goal, we study multiple methods for generating adversarial examples. We propose three research questions regarding the impact and the universality of adversarial examples, conducting relevant experiments in order to answer our research questions. Our empirical results demonstrate that certain adversarial example generation methods are specific to the attacked classification model, while others tend to be generic. We thus conclude that deep neural networks trained for user identification tasks based on motion sensors are subject to a high percentage of misclassification when given adversarial input.
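
The abstract refers to crafting adversarial examples against a motion-sensor-based user classifier. As a rough illustration of what such an attack looks like, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard gradient-based technique for generating adversarial examples. The network architecture, the signal shape (`SIGNAL_SHAPE`), the number of users (`NUM_USERS`), and the perturbation budget `epsilon` are hypothetical placeholders for demonstration only; they do not correspond to the specific models or attack settings evaluated in the paper.

```python
# Illustrative FGSM-style attack on a motion-sensor user classifier.
# All shapes, sizes, and hyperparameters below are assumptions for
# demonstration; they are not the exact setup used in the paper.
import numpy as np
import tensorflow as tf

NUM_USERS = 50           # hypothetical number of enrolled users
SIGNAL_SHAPE = (150, 6)  # hypothetical window: 150 time steps x 6 sensor channels

# A small stand-in CNN over accelerometer/gyroscope windows (untrained placeholder).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=SIGNAL_SHAPE),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_USERS, activation="softmax"),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm(signal, true_label, epsilon=0.05):
    """Return an adversarial version of `signal` obtained with one
    signed-gradient step that increases the loss for `true_label`."""
    x = tf.convert_to_tensor(signal[np.newaxis, ...], dtype=tf.float32)
    y = tf.convert_to_tensor([true_label])
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    # Perturb the raw signal in the direction that increases the loss.
    return (x + epsilon * tf.sign(grad)).numpy()[0]

# Usage example on a random signal attributed to user 3.
clean = np.random.randn(*SIGNAL_SHAPE).astype("float32")
adv = fgsm(clean, true_label=3, epsilon=0.05)
print("prediction on clean signal:", model.predict(clean[np.newaxis], verbose=0).argmax())
print("prediction on adversarial signal:", model.predict(adv[np.newaxis], verbose=0).argmax())
```

With a trained model, a white-box attacker who can query gradients could use a step of this kind to push a recorded signal across the decision boundary; whether such perturbations transfer to other classifiers is exactly the universality question the paper investigates.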
