Paper Title
Reinforcement Learning for Autonomous Driving with Latent State Inference and Spatial-Temporal Relationships
Paper Authors
Paper Abstract
Deep reinforcement learning (DRL) provides a promising way to learn navigation in complex autonomous driving scenarios. However, identifying the subtle cues that can indicate drastically different outcomes remains an open problem in designing autonomous systems that operate in human environments. In this work, we show that explicitly inferring the latent state and encoding spatial-temporal relationships in a reinforcement learning framework can help address this difficulty. We encode prior knowledge about the latent states of other drivers through a framework that combines the reinforcement learner with a supervised learner. In addition, we model the influence passing between different vehicles through graph neural networks (GNNs). The proposed framework significantly improves performance in the context of navigating T-intersections compared with state-of-the-art baseline approaches.
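The architecture described above — a shared GNN encoder whose embedding feeds both a supervised latent-state head and an RL policy head — can be sketched in a minimal, hedged form. The sketch below is not the paper's implementation: the vehicle count, feature sizes, fully connected adjacency, mean aggregation, and random weights are all illustrative assumptions, intended only to show the data flow of one message-passing round followed by the two heads.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 vehicles, each with a 6-d observed feature
# vector (e.g. position, velocity). Sizes are illustrative, not the paper's.
num_vehicles, feat_dim, hidden_dim = 4, 6, 8
features = rng.normal(size=(num_vehicles, feat_dim))

# Adjacency between vehicles; here fully connected minus self-loops,
# purely for illustration (the paper would use scene-dependent edges).
adj = np.ones((num_vehicles, num_vehicles)) - np.eye(num_vehicles)

# One round of GNN message passing: mean-aggregate neighbor features,
# then mix with each node's own features through shared weights + ReLU.
W_self = rng.normal(size=(feat_dim, hidden_dim))
W_neigh = rng.normal(size=(feat_dim, hidden_dim))
messages = adj @ features / adj.sum(axis=1, keepdims=True)
hidden = np.maximum(0.0, features @ W_self + messages @ W_neigh)

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

# Two heads on the shared embedding, mirroring the idea at a high level:
# a supervised head inferring a binary latent driver state per vehicle
# (e.g. aggressive vs. conservative) and a policy head scoring ego
# actions. Weights are random stand-ins; in training, the supervised
# loss would shape the shared encoder alongside the RL objective.
W_latent = rng.normal(size=(hidden_dim, 2))
W_policy = rng.normal(size=(hidden_dim, 3))

latent_probs = softmax(hidden @ W_latent)      # (num_vehicles, 2)
action_scores = softmax(hidden[0] @ W_policy)  # ego vehicle = node 0
```

In training, the latent-state head would receive supervised labels while the policy head receives the RL gradient; both gradients flow into the shared GNN encoder, which is the mechanism the abstract credits for injecting prior knowledge about other drivers.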