Paper Title


COBE: Contextualized Object Embeddings from Narrated Instructional Video

Paper Authors

Gedas Bertasius, Lorenzo Torresani

Paper Abstract


Many objects in the real world undergo dramatic variations in visual appearance. For example, a tomato may be red or green, sliced or chopped, fresh or fried, liquid or solid. Training a single detector to accurately recognize tomatoes in all these different states is challenging. On the other hand, contextual cues (e.g., the presence of a knife, a cutting board, a strainer or a pan) are often strongly indicative of how the object appears in the scene. Recognizing such contextual cues is useful not only to improve the accuracy of object detection or to determine the state of the object, but also to understand its functional properties and to infer ongoing or upcoming human-object interactions. A fully-supervised approach to recognizing object states and their contexts in the real-world is unfortunately marred by the long-tailed, open-ended distribution of the data, which would effectively require massive amounts of annotations to capture the appearance of objects in all their different forms. Instead of relying on manually-labeled data for this task, we propose a new framework for learning Contextualized OBject Embeddings (COBE) from automatically-transcribed narrations of instructional videos. We leverage the semantic and compositional structure of language by training a visual detector to predict a contextualized word embedding of the object and its associated narration. This enables the learning of an object representation where concepts relate according to a semantic language metric. Our experiments show that our detector learns to predict a rich variety of contextual object information, and that it is highly effective in the settings of few-shot and zero-shot learning.
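The core idea of the abstract — training a visual detector to predict a contextualized word embedding of the object and its narration, so that recognition reduces to similarity in a semantic language space — can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual architecture: the embedding dimensions, the linear projection head `W`, and the toy "contextualized" state embeddings are all hypothetical stand-ins for a real detector backbone and a language model.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(v):
    return v / np.linalg.norm(v)

def cosine_similarity(a, b):
    """Similarity in the shared embedding space (the 'semantic language metric')."""
    return float(l2_normalize(a) @ l2_normalize(b))

# Toy stand-ins for contextualized embeddings of object states
# (in COBE these would come from a language model over the narration).
state_embeddings = {
    "tomato sliced": rng.normal(size=8),
    "tomato fried": rng.normal(size=8),
    "tomato fresh": rng.normal(size=8),
}

D_VIS, D_EMB = 16, 8
W = rng.normal(size=(D_EMB, D_VIS)) * 0.1   # hypothetical projection head (untrained here)

visual_feature = rng.normal(size=D_VIS)      # e.g. a region feature from the detector
pred = W @ visual_feature                    # predicted contextual embedding

# Zero-shot-style recognition: pick the state whose embedding is closest.
scores = {state: cosine_similarity(pred, emb) for state, emb in state_embeddings.items()}
best_state = max(scores, key=scores.get)
print(best_state)
```

With a trained projection, the nearest embedding would name the object together with its contextual state (e.g. "sliced" vs. "fried"), which is what lets the approach generalize to few-shot and zero-shot settings without per-state bounding-box labels.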
