Paper Title

Integrating Intrinsic and Extrinsic Explainability: The Relevance of Understanding Neural Networks for Human-Robot Interaction

Authors

Tom Weber, Stefan Wermter

Abstract

Explainable artificial intelligence (XAI) can help foster trust in and acceptance of intelligent and autonomous systems. Moreover, understanding the motivation for an agent's behavior results in better and more successful collaborations between robots and humans. However, not only can humans benefit from a robot's explanations, but the robot itself can also benefit from explanations given to it. Currently, most attention is paid to explaining deep neural networks and black-box models, yet many of these approaches are not applicable to humanoid robots. Therefore, this position paper describes current problems with adapting XAI methods to explainable neurorobotics. Furthermore, it introduces NICO, an open-source humanoid robot platform, and shows how the interaction of intrinsic explanations by the robot itself and extrinsic explanations provided by the environment enables efficient robotic behavior.
