Paper Title
Weakly-Supervised Action Localization by Generative Attention Modeling
Paper Authors
Paper Abstract
Weakly-supervised temporal action localization is the problem of learning an action localization model with only video-level action labels available. The general framework relies largely on classification activation: an attention model identifies the action-related frames, which are then categorized into different classes. Such a method results in the action-context confusion issue: context frames near action clips tend to be recognized as action frames themselves, since they are closely related to the specific classes. To solve this problem, in this paper we propose to model the class-agnostic frame-wise probability conditioned on the frame attention using a conditional Variational Auto-Encoder (VAE). Based on the observation that context exhibits a notable difference from action at the representation level, a probabilistic model, i.e., a conditional VAE, is learned to model the likelihood of each frame given the attention. By maximizing the conditional probability with respect to the attention, the action and non-action frames are well separated. Experiments on THUMOS14 and ActivityNet1.2 demonstrate the advantage of our method and its effectiveness in handling the action-context confusion problem. Code is now available on GitHub.
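The core idea of the abstract can be illustrated with a toy sketch: model the likelihood of each frame's features conditioned on its attention value, then choose the attention that maximizes that likelihood. The sketch below is a minimal illustration, not the paper's implementation: it replaces the conditional VAE decoder with a simple attention-conditioned Gaussian, and the prototype means, feature dimensions, and grid search over attention values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frame features: action frames cluster near +2, context frames near -2.
# (Assumption: the paper only states that context differs notably from action
# at the representation level; these Gaussians are illustrative stand-ins.)
action_frames = rng.normal(2.0, 0.5, size=(10, 4))
context_frames = rng.normal(-2.0, 0.5, size=(10, 4))
frames = np.vstack([action_frames, context_frames])

def log_likelihood(x, attention):
    """Log p(x | attention) under a simple attention-conditioned Gaussian:
    the mean interpolates between a context prototype (-2) and an action
    prototype (+2), standing in for the conditional VAE's decoder."""
    mu = attention * 2.0 + (1.0 - attention) * (-2.0)  # conditional mean
    return -0.5 * np.sum((x - mu) ** 2)

# Maximize the conditional likelihood with respect to the per-frame attention
# (gradient-based in the paper; a coarse grid search suffices for this toy).
candidates = np.linspace(0.0, 1.0, 101)
attention = np.array([
    max(candidates, key=lambda a: log_likelihood(x, a)) for x in frames
])

# Action and context frames end up separated by the optimized attention.
print(attention[:10].round(2))   # action frames -> attention near 1
print(attention[10:].round(2))   # context frames -> attention near 0
```

Even in this simplified form, the separation emerges because action and context features occupy different regions of the representation space, so the likelihood is maximized at opposite ends of the attention range.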