Paper Title

AutoTrajectory: Label-free Trajectory Extraction and Prediction from Videos using Dynamic Points

Paper Authors

Yuexin Ma, Xinge Zhu, Xinjing Cheng, Ruigang Yang, Jiming Liu, Dinesh Manocha

Abstract

Current methods for trajectory prediction operate in supervised manners, and therefore require vast quantities of corresponding ground truth data for training. In this paper, we present a novel, label-free algorithm, AutoTrajectory, for trajectory extraction and prediction that uses raw videos directly. To better capture the moving objects in videos, we introduce dynamic points. We use them to model dynamic motions by using a forward-backward extractor to keep temporal consistency and using image reconstruction to keep spatial consistency in an unsupervised manner. Then we aggregate dynamic points into instance points, which stand for moving objects such as pedestrians in videos. Finally, we extract trajectories by matching instance points for prediction training. To the best of our knowledge, our method is the first to achieve unsupervised learning of trajectory extraction and prediction. We evaluate the performance on well-known trajectory datasets and show that our method is effective for real-world videos and can use raw videos to further improve the performance of existing models.
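The abstract outlines a pipeline whose last two stages are aggregating dynamic points into per-object instance points and matching those instances across frames to form trajectories. Below is a minimal, hypothetical sketch of those two stages only, using simple radius-based clustering and greedy nearest-neighbor linking; the function names, the clustering radius, and the matching rule are illustrative assumptions, not the paper's actual implementation (which learns dynamic points with a forward-backward extractor and image reconstruction).

```python
import numpy as np

def aggregate_to_instances(dynamic_points, radius=2.0):
    """Greedily cluster nearby dynamic points and return one instance
    point (the cluster centroid) per cluster. Illustrative stand-in for
    the paper's aggregation step; `radius` is an assumed hyperparameter."""
    points = [np.asarray(p, dtype=float) for p in dynamic_points]
    instances = []
    while points:
        seed = points.pop(0)
        cluster, remaining = [seed], []
        for p in points:
            if np.linalg.norm(p - seed) <= radius:
                cluster.append(p)
            else:
                remaining.append(p)
        points = remaining
        instances.append(np.mean(cluster, axis=0))
    return instances

def match_instances(frames):
    """Link instance points across consecutive frames by greedy
    nearest-neighbor matching to form trajectories. A simplification of
    the matching step described in the abstract."""
    trajectories = [[np.asarray(p, dtype=float)] for p in frames[0]]
    for frame in frames[1:]:
        used = set()
        for traj in trajectories:
            last = traj[-1]
            candidates = [(np.linalg.norm(np.asarray(p, dtype=float) - last), i)
                          for i, p in enumerate(frame) if i not in used]
            if candidates:
                _, i = min(candidates)
                traj.append(np.asarray(frame[i], dtype=float))
                used.add(i)
    return trajectories

# Toy usage: two pedestrians observed as instance points over three frames.
frames = [[(0, 0), (10, 10)], [(1, 0), (10, 11)], [(2, 0), (10, 12)]]
trajectories = match_instances(frames)
```

With labeled trajectories extracted this way from raw video, any standard supervised trajectory predictor can then be trained, which is how the method improves existing models without manual annotation.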
