Paper Title

Shared Cross-Modal Trajectory Prediction for Autonomous Driving

Authors

Chiho Choi, Joon Hee Choi, Jiachen Li, Srikanth Malla

Abstract

Predicting future trajectories of traffic agents in highly interactive environments is an essential and challenging problem for the safe operation of autonomous driving systems. Based on the fact that self-driving vehicles are equipped with various types of sensors (e.g., LiDAR scanner, RGB camera, radar, etc.), we propose a Cross-Modal Embedding framework that aims to benefit from the use of multiple input modalities. At training time, our model learns to embed a set of complementary features in a shared latent space by jointly optimizing the objective functions across different types of input data. At test time, a single input modality (e.g., LiDAR data) is required to generate predictions from the input perspective (i.e., in the LiDAR space), while taking advantage of the model trained with multiple sensor modalities. An extensive evaluation is conducted to show the efficacy of the proposed framework using two benchmark driving datasets.
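The abstract's core mechanism can be illustrated with a small sketch: each modality has its own encoder into a shared latent space, training jointly optimizes per-modality objectives plus an alignment term, and at test time a single modality's encoder suffices. This is a minimal NumPy illustration of that general idea, not the authors' architecture; all dimensions, encoder shapes, and loss terms are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: per-modality feature dims and the shared latent dim.
D_LIDAR, D_IMAGE, D_LATENT = 64, 128, 32

# Illustrative linear encoders (stand-ins for the per-modality networks).
W_lidar = rng.normal(scale=0.1, size=(D_LIDAR, D_LATENT))
W_image = rng.normal(scale=0.1, size=(D_IMAGE, D_LATENT))

def encode(x, W):
    """Project a modality-specific feature vector into the shared latent space."""
    return x @ W

def joint_loss(x_lidar, x_image):
    """Joint objective over modalities: per-modality terms plus an alignment
    term pulling the two embeddings of the same scene together."""
    z_l = encode(x_lidar, W_lidar)
    z_i = encode(x_image, W_image)
    # Alignment: squared distance between embeddings of the same scene.
    align = np.sum((z_l - z_i) ** 2)
    # Stand-ins for the per-modality trajectory-prediction losses.
    per_modality = np.sum(z_l ** 2) + np.sum(z_i ** 2)
    return align + per_modality

x_l = rng.normal(size=D_LIDAR)
x_i = rng.normal(size=D_IMAGE)
loss = joint_loss(x_l, x_i)

# Test time: only one modality (e.g., LiDAR) is needed; its encoder alone
# maps the input into the shared latent space for downstream prediction.
z_test = encode(x_l, W_lidar)
```

At test time only `W_lidar` is exercised, which mirrors the abstract's claim that a single input modality suffices while still benefiting from the jointly trained shared space.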
