Paper Title

fakeWeather: Adversarial Attacks for Deep Neural Networks Emulating Weather Conditions on the Camera Lens of Autonomous Systems

Authors

Alberto Marchisio, Giovanni Caramia, Maurizio Martina, Muhammad Shafique

Abstract


Recently, Deep Neural Networks (DNNs) have achieved remarkable performances in many applications, while several studies have exposed their vulnerability to malicious attacks. In this paper, we emulate the effects of natural weather conditions to introduce plausible perturbations that mislead the DNNs. By observing the effects of such atmospheric perturbations on the camera lenses, we model the patterns to create different masks that fake the effects of rain, snow, and hail. Even though the perturbations introduced by our attacks are visible, their presence remains unnoticed due to their association with natural events, which can be especially catastrophic for fully-autonomous and unmanned vehicles. We test our proposed fakeWeather attacks on multiple Convolutional Neural Network and Capsule Network models, and report noticeable accuracy drops in the presence of such adversarial perturbations. Our work introduces a new security threat for DNNs, which is especially severe for safety-critical applications and autonomous systems.
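To make the idea of a weather-emulating mask concrete, the sketch below overlays synthetic rain-like streaks on an input image before it would reach a classifier. This is a minimal illustration only, not the authors' actual mask model: the function names (`fake_rain_mask`, `apply_mask`) and all parameters (streak count, length, brightness) are hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_rain_mask(h, w, num_drops=40, length=6, brightness=0.8):
    """Hypothetical sketch: short bright vertical streaks that mimic
    raindrops on a camera lens (not the paper's exact formulation)."""
    mask = np.zeros((h, w), dtype=np.float32)
    for _ in range(num_drops):
        r = rng.integers(0, h - length)  # streak start row
        c = rng.integers(0, w)           # streak column
        mask[r:r + length, c] = brightness
    return mask

def apply_mask(image, mask):
    # Blend the streaks into the image, clipping to the valid [0, 1] range.
    return np.clip(image + mask[..., None], 0.0, 1.0)

# Stand-in 32x32 RGB image (e.g. CIFAR-sized input) with pixels in [0, 1].
img = rng.random((32, 32, 3)).astype(np.float32)
attacked = apply_mask(img, fake_rain_mask(32, 32))
```

The perturbed image `attacked` would then be fed to the DNN under test; an accuracy drop relative to the clean input indicates the model's susceptibility to this class of weather-like perturbation.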
