Paper Title

Visual Compositional Learning for Human-Object Interaction Detection

Authors

Zhi Hou, Xiaojiang Peng, Yu Qiao, Dacheng Tao

Abstract

Human-Object Interaction (HOI) detection aims to localize and infer relationships between humans and objects in an image. It is challenging because the enormous number of possible combinations of object and verb types forms a long-tail distribution. We devise a deep Visual Compositional Learning (VCL) framework, a simple yet efficient framework that effectively addresses this problem. VCL first decomposes an HOI representation into object- and verb-specific features, and then composes new interaction samples in the feature space by stitching the decomposed features. This integration of decomposition and composition enables VCL to share object and verb features among different HOI samples and images, and to generate new interaction samples and new types of HOI; it thus largely alleviates the long-tail distribution problem and benefits low-shot and zero-shot HOI detection. Extensive experiments demonstrate that the proposed VCL effectively improves the generalization of HOI detection on HICO-DET and V-COCO and outperforms recent state-of-the-art methods on HICO-DET. Code is available at https://github.com/zhihou7/VCL.
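To make the decompose-and-compose step concrete, below is a minimal PyTorch sketch of the idea described in the abstract. This is not the authors' implementation: the module names (VCLSketch, verb_branch, object_branch), the feature dimensions, and the use of concatenation plus in-batch shuffling for composition are all assumptions inferred from the abstract; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn


class VCLSketch(nn.Module):
    """Hedged sketch of Visual Compositional Learning: decompose HOI
    features into verb/object parts, then stitch parts across samples
    to compose new interaction features (hypothetical names/sizes)."""

    def __init__(self, feat_dim=1024, num_hoi=600):
        super().__init__()
        # Decompose an HOI representation into verb- and object-specific features.
        self.verb_branch = nn.Linear(feat_dim, feat_dim // 2)
        self.object_branch = nn.Linear(feat_dim, feat_dim // 2)
        # Classify a stitched (verb, object) feature pair into HOI categories.
        self.hoi_classifier = nn.Linear(feat_dim, num_hoi)

    def forward(self, human_feats, object_feats):
        # Decomposition: verb cues from the human region, object cues from
        # the object region (an assumption about where each branch reads from).
        verbs = self.verb_branch(human_feats)        # (B, D/2)
        objects = self.object_branch(object_feats)   # (B, D/2)

        # Original pairs: stitch each sample's own verb and object features.
        real_logits = self.hoi_classifier(torch.cat([verbs, objects], dim=1))

        # Composition: stitch verbs with objects from *other* samples in the
        # batch, synthesizing new interaction features, including verb-object
        # combinations that never co-occur in a single image.
        shuffled = objects[torch.randperm(objects.size(0))]
        composed_logits = self.hoi_classifier(torch.cat([verbs, shuffled], dim=1))
        return real_logits, composed_logits


# Usage with random stand-ins for pooled ROI features (hypothetical shapes):
model = VCLSketch()
h = torch.randn(8, 1024)   # human-region features
o = torch.randn(8, 1024)   # object-region features
real_logits, composed_logits = model(h, o)
```

In training, the composed pairs would be supervised with the label of the stitched verb-object combination, so verb and object features are shared across samples and images; this sketch omits the losses and the zero-shot relabeling details.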
