Paper Title
Generating Adjacency Matrix for Video Relocalization
Paper Authors
Paper Abstract
In this paper, we continue our work on the video relocalization task. Building on the use of graph convolution to extract intra-video and inter-video frame features, we improve the method with a similarity-metric-based graph convolution, whose weighted adjacency matrix is obtained by computing a similarity metric between the features of any two different time steps in the graph. Experiments on the ActivityNet v1.2 and Thumos14 datasets demonstrate the effectiveness of this improvement, and our method outperforms the state-of-the-art methods.
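The similarity-based adjacency construction described in the abstract can be sketched as follows. This is a minimal illustration only, assuming cosine similarity as the metric and softmax row normalization; the paper's actual metric, normalization, and layer design may differ, and the function names are hypothetical.

```python
import numpy as np

def similarity_adjacency(features):
    """Build a weighted adjacency matrix from per-time-step features.

    features: (T, D) array, one feature vector per time step.
    Assumption: cosine similarity between every pair of time steps
    gives the edge weight; rows are softmax-normalized so each node's
    outgoing weights sum to 1.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized = features / np.maximum(norms, 1e-8)
    sim = normalized @ normalized.T          # (T, T) cosine similarities
    exp = np.exp(sim - sim.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def graph_convolution(features, weight):
    """One similarity-based graph convolution step (sketch):
    aggregate neighbor features with the computed adjacency,
    project with a learned weight matrix, apply ReLU."""
    adj = similarity_adjacency(features)
    return np.maximum(adj @ features @ weight, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))    # 8 time steps, 16-dim features
w = rng.standard_normal((16, 16))   # learned projection (random here)
out = graph_convolution(x, w)
print(out.shape)  # (8, 16)
```

Because the adjacency is derived from the features themselves rather than fixed in advance, edges between semantically similar time steps (within or across videos) receive larger weights during aggregation.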