Paper Title

Back to the Future: Cycle Encoding Prediction for Self-supervised Contrastive Video Representation Learning

Authors

Xinyu Yang, Majid Mirmehdi, Tilo Burghardt

Abstract


In this paper we show that learning video feature spaces in which temporal cycles are maximally predictable benefits action classification. In particular, we propose a novel learning approach termed Cycle Encoding Prediction (CEP) that is able to effectively represent high-level spatio-temporal structure of unlabelled video content. CEP builds a latent space wherein the concept of closed forward-backward as well as backward-forward temporal loops is approximately preserved. As a self-supervision signal, CEP leverages the bi-directional temporal coherence of the video stream and applies loss functions that encourage both temporal cycle closure as well as contrastive feature separation. Architecturally, the underpinning network structure utilises a single feature encoder for all video snippets, adding two predictive modules that learn temporal forward and backward transitions. We apply our framework for pretext training of networks for action recognition tasks. We report significantly improved results for the standard datasets UCF101 and HMDB51. Detailed ablation studies support the effectiveness of the proposed components. We publish source code for the CEP components in full with this paper.
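The abstract describes a single shared encoder plus two predictive modules (forward and backward transitions), trained with a temporal cycle-closure loss and a contrastive feature-separation loss. The toy numpy sketch below illustrates that combination of loss terms only; it is not the authors' implementation, and all dimensions, weights, and function names (`encode`, `cycle_closure_loss`, `contrastive_loss`) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (not from the paper).
D_in, D_feat = 32, 8
W_enc = rng.standard_normal((D_in, D_feat))   # shared snippet encoder (hypothetical)
W_fwd = rng.standard_normal((D_feat, D_feat)) # forward-transition predictor
W_bwd = rng.standard_normal((D_feat, D_feat)) # backward-transition predictor

def encode(snippet):
    # Stand-in for the single feature encoder used for all video snippets:
    # maps a flattened snippet to a unit-norm embedding.
    z = snippet @ W_enc
    return z / np.linalg.norm(z)

def cycle_closure_loss(z):
    # Forward-backward temporal loop: predict one step ahead, predict back,
    # and penalise the distance to the starting representation.
    z_cycle = (z @ W_fwd) @ W_bwd
    return np.sum((z - z_cycle) ** 2)

def contrastive_loss(z_anchor, z_pos, z_negs, temperature=0.1):
    # InfoNCE-style term: pull the positive embedding close to the anchor,
    # push negatives apart in the latent space.
    sims = np.array([z_anchor @ z_pos] + [z_anchor @ n for n in z_negs])
    logits = sims / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

snippets = rng.standard_normal((4, D_in))  # four toy video snippets
zs = np.stack([encode(s) for s in snippets])

# Total self-supervision signal: cycle closure + contrastive separation.
loss = cycle_closure_loss(zs[0]) + contrastive_loss(zs[0], zs[1], zs[2:])
print(loss >= 0)  # both terms are non-negative by construction
```

In the paper's actual setup the encoder is a video network and the predictors learn temporal transitions; this sketch only shows how a cycle-closure penalty and a contrastive term can be combined into one training objective.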
