Paper Title

Particle Video Revisited: Tracking Through Occlusions Using Point Trajectories

Authors

Adam W. Harley, Zhaoyuan Fang, Katerina Fragkiadaki

Abstract

Tracking pixels in videos is typically studied as an optical flow estimation problem, where every pixel is described with a displacement vector that locates it in the next frame. Even though wider temporal context is freely available, prior efforts to take this into account have yielded only small gains over 2-frame methods. In this paper, we revisit Sand and Teller's "particle video" approach, and study pixel tracking as a long-range motion estimation problem, where every pixel is described with a trajectory that locates it in multiple future frames. We re-build this classic approach using components that drive the current state-of-the-art in flow and object tracking, such as dense cost maps, iterative optimization, and learned appearance updates. We train our models using long-range amodal point trajectories mined from existing optical flow data that we synthetically augment with multi-frame occlusions. We test our approach in trajectory estimation benchmarks and in keypoint label propagation tasks, and compare favorably against state-of-the-art optical flow and feature tracking methods.
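The core contrast in the abstract, between chained 2-frame optical flow and direct multi-frame point trajectories, can be illustrated with a toy sketch. This is not the paper's code; all functions and values below are hypothetical stand-ins that only show why chaining per-frame displacements compounds errors across an occlusion, while a trajectory predicted relative to the start frame does not.

```python
def chain_flow(start, flows):
    """Chained 2-frame tracking: apply each frame-to-frame displacement
    in turn. An error in any one flow field (e.g. from an occlusion)
    corrupts every later position."""
    x, y = start
    track = [(x, y)]
    for dx, dy in flows:
        x, y = x + dx, y + dy
        track.append((x, y))
    return track


def particle_trajectory(start, displacements):
    """Particle-video-style tracking: each position is expressed relative
    to the start frame, so a bad estimate in one frame need not corrupt
    the positions that follow it."""
    x0, y0 = start
    return [(x0, y0)] + [(x0 + dx, y0 + dy) for dx, dy in displacements]


# Toy scenario: the point truly moves 1 px/frame to the right, but the
# middle frame-to-frame flow is corrupted (5 instead of 1) by an occlusion.
chained = chain_flow((0.0, 0.0), [(1.0, 0.0), (5.0, 0.0), (1.0, 0.0)])
# chained[-1] == (7.0, 0.0): the occlusion error propagates to the end.

direct = particle_trajectory((0.0, 0.0), [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)])
# direct[-1] == (3.0, 0.0): later positions are unaffected by frame 2.
```

The learned components mentioned in the abstract (dense cost maps, iterative optimization, appearance updates) replace the fixed per-frame displacements assumed here with estimates refined over the whole temporal window.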
