Paper Title

Observed Adversaries in Deep Reinforcement Learning

Paper Authors

Eugene Lim, Harold Soh

Paper Abstract

In this work, we point out the problem of observed adversaries for deep policies. Specifically, recent work has shown that deep reinforcement learning is susceptible to adversarial attacks where an observed adversary acts under environmental constraints to invoke natural but adversarial observations. This setting is particularly relevant for HRI since HRI-related robots are expected to perform their tasks around and with other agents. In this work, we demonstrate that this effect persists even with low-dimensional observations. We further show that these adversarial attacks transfer across victims, which potentially allows malicious attackers to train an adversary without access to the target victim.
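The setting the abstract describes can be illustrated with a minimal sketch: an adversary that never edits the victim's observation vector directly, but acts through its own (observed) low-dimensional state under environmental constraints, and is trained to minimize a frozen victim policy's return. The toy environment, both policies, and the random-search optimizer below are hypothetical illustrations of this idea, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D environment: the victim tries to stay near a goal at 0 and its
# observation includes the adversary's (low-dimensional) position. The
# adversary influences the victim only through its own movement -- it
# never perturbs the victim's observation vector directly.
def rollout(adv_params, horizon=50):
    victim_pos, adv_pos = 1.0, -1.0
    total_reward = 0.0
    for _ in range(horizon):
        obs = np.array([victim_pos, adv_pos])          # victim's observation
        victim_action = -0.5 * obs[0] + 0.3 * obs[1]   # frozen victim policy
        adv_action = np.tanh(adv_params @ obs)         # learned adversary policy
        victim_pos += np.clip(victim_action, -0.2, 0.2)
        adv_pos += np.clip(adv_action, -0.2, 0.2)      # environmental constraint on the adversary
        total_reward += -abs(victim_pos)               # victim is rewarded for staying near 0
    return total_reward

# Random-search "training" of the adversary: minimize the victim's return.
params = np.zeros(2)
best = rollout(params)
for _ in range(500):
    candidate = params + 0.1 * rng.standard_normal(2)
    ret = rollout(candidate)
    if ret < best:                                     # lower victim return = stronger attack
        params, best = candidate, ret

print("victim return with passive adversary:", rollout(np.zeros(2)))
print("victim return with trained adversary:", best)
```

Because the adversary in this sketch only needs rollouts of the victim (not its gradients or parameters), the same training loop could in principle be run against a surrogate victim, which is the kind of transfer-based threat the abstract alludes to.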
