Paper Title
0-MMS: Zero-Shot Multi-Motion Segmentation With A Monocular Event Camera
Paper Authors
Paper Abstract
Segmentation of moving objects in dynamic scenes is a key process in scene understanding for navigation tasks. Classical cameras suffer from motion blur in such scenarios, rendering them ineffective. In contrast, event cameras, because of their high temporal resolution and lack of motion blur, are tailor-made for this problem. We present an approach for monocular multi-motion segmentation, the first of its kind to our knowledge, that combines bottom-up feature tracking and top-down motion compensation into a unified pipeline. Using the events within a time interval, our method segments the scene into multiple motions by splitting and merging. We further speed up our method using the concepts of motion propagation and cluster keyslices. The approach was successfully evaluated on both challenging real-world and synthetic scenarios from the EV-IMO, EED, and MOD datasets, outperforming the previous state of the art in detection rate by 12% and achieving new state-of-the-art average detection rates of 81.06%, 94.2%, and 82.35% on the aforementioned datasets. To enable further research and systematic evaluation of multi-motion segmentation, we present and open-source a new dataset/benchmark called MOD++, which includes challenging sequences and extensive data stratification in terms of camera and object motion, velocity magnitudes, direction, and rotational speeds.