Paper Title
Improving Unsupervised Video Object Segmentation with Motion-Appearance Synergy
Paper Authors
Paper Abstract
We present IMAS, a method that segments the primary objects in videos without manual annotation in training or inference. Previous methods in unsupervised video object segmentation (UVOS) have demonstrated the effectiveness of motion as either input or supervision for segmentation. However, motion signals may be uninformative or even misleading in cases such as deformable objects and objects with reflections, causing unsatisfactory segmentation. In contrast, IMAS achieves Improved UVOS with Motion-Appearance Synergy. Our method has two training stages: 1) a motion-supervised object discovery stage that deals with motion-appearance conflicts through a learnable residual pathway; 2) a refinement stage with both low- and high-level appearance supervision to correct model misconceptions learned from misleading motion cues. Additionally, we propose motion-semantic alignment as a model-agnostic, annotation-free hyperparam tuning method. We demonstrate its effectiveness in tuning critical hyperparams previously tuned with human annotation or hand-crafted hyperparam-specific metrics. IMAS greatly improves the segmentation quality on several common UVOS benchmarks. For example, we surpass previous methods by 8.3% on the DAVIS16 benchmark with only a standard ResNet and convolutional heads. We intend to release our code for future research and applications.
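The abstract compresses a lot of mechanism into a few sentences; the two sketches below unpack it under explicit assumptions. First, a minimal PyTorch sketch of the stage-1 setup: a standard ResNet backbone with convolutional heads, where the mask is trained to explain the observed optical flow and a learnable residual pathway absorbs flow that conflicts with appearance (reflections, deformations) so the mask itself is not corrupted. All module names, tensor shapes, and loss weights here are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MotionSupervisedSegmenter(nn.Module):
    """Illustrative sketch (not the authors' code) of stage 1: a standard
    ResNet trunk with convolutional heads, plus a learnable residual
    pathway for motion-appearance conflicts."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep the convolutional trunk; drop avgpool and the fc classifier.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Convolutional head predicting a single-channel soft object mask.
        self.mask_head = nn.Conv2d(2048, 1, kernel_size=1)
        # Residual pathway: a flow correction the model may emit where the
        # observed motion conflicts with appearance (e.g., reflections).
        self.residual_head = nn.Conv2d(2048, 2, kernel_size=1)

    def forward(self, frame):
        feats = self.encoder(frame)                  # (B, 2048, H/32, W/32)
        mask = torch.sigmoid(self.mask_head(feats))  # (B, 1, H/32, W/32)
        residual_flow = self.residual_head(feats)    # (B, 2, H/32, W/32)
        return mask, residual_flow

def stage1_loss(flow_from_mask, residual_flow, target_flow, residual_weight=1e-2):
    """Motion-supervised reconstruction (hypothetical form): the flow
    reconstructed from the mask via a simple per-object motion model
    (details omitted), plus the residual, should explain the observed
    optical flow. Penalizing the residual's magnitude keeps it a last
    resort rather than a shortcut around the mask."""
    recon = flow_from_mask + residual_flow
    return (nn.functional.l1_loss(recon, target_flow)
            + residual_weight * residual_flow.abs().mean())
```

Second, a hedged sketch of how motion-semantic alignment could score a hyperparam candidate without annotations: a good setting should produce masks whose foreground and background are semantically well separated. The scoring rule below is one plausible instantiation, not the paper's exact criterion.

```python
import torch
import torch.nn.functional as F

def motion_semantic_alignment_score(masks, semantic_feats):
    """Hypothetical annotation-free score for hyperparam tuning: a good
    mask splits pixels into a foreground and a background whose pooled
    semantic features (e.g., from a self-supervised ViT) are dissimilar.
    Higher is better.

    masks:          (B, 1, H, W) soft masks in [0, 1]
    semantic_feats: (B, C, H, W) per-pixel semantic features
    """
    area_fg = masks.sum(dim=(2, 3)).clamp(min=1e-6)        # (B, 1)
    area_bg = (1 - masks).sum(dim=(2, 3)).clamp(min=1e-6)  # (B, 1)
    fg = (masks * semantic_feats).sum(dim=(2, 3)) / area_fg        # (B, C)
    bg = ((1 - masks) * semantic_feats).sum(dim=(2, 3)) / area_bg  # (B, C)
    # Dissimilar foreground/background prototypes -> cleaner object split.
    return -F.cosine_similarity(fg, bg, dim=1).mean()
```

In use, one would sweep a critical hyperparam (for instance, a flow-magnitude threshold), run inference on unlabeled frames for each candidate value, and keep the value that maximizes this alignment score, replacing tuning that previously relied on human annotation or hand-crafted hyperparam-specific metrics.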