Paper Title

MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution

Paper Authors

Wenbo Li, Xin Tao, Taian Guo, Lu Qi, Jiangbo Lu, Jiaya Jia

Paper Abstract

Video super-resolution (VSR) aims to utilize multiple low-resolution frames to generate a high-resolution prediction for each frame. In this process, inter- and intra-frames are the key sources for exploiting temporal and spatial information. However, there are a couple of limitations for existing VSR methods. First, optical flow is often used to establish temporal correspondence. But flow estimation itself is error-prone and affects recovery results. Second, similar patterns existing in natural images are rarely exploited for the VSR task. Motivated by these findings, we propose a temporal multi-correspondence aggregation strategy to leverage similar patches across frames, and a cross-scale nonlocal-correspondence aggregation scheme to explore self-similarity of images across scales. Based on these two new modules, we build an effective multi-correspondence aggregation network (MuCAN) for VSR. Our method achieves state-of-the-art results on multiple benchmark datasets. Extensive experiments justify the effectiveness of our method.
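
For intuition only, below is a minimal PyTorch-style sketch of the temporal multi-correspondence idea described in the abstract: for every patch in the current frame's features, the k most similar patches from a neighboring frame are found and blended with similarity-based weights. This is not the authors' implementation; the function name `aggregate_topk_patches`, the patch size, and the value of k are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def aggregate_topk_patches(target, neighbor, patch=3, k=4):
    """Sketch: for each patch of `target`, pick the k most similar patches
    of `neighbor` (cosine similarity) and blend them with softmax weights.
    `target` and `neighbor` are (B, C, H, W) feature maps."""
    b, c, h, w = target.shape
    # Unfold both feature maps into (B, C*patch*patch, L) patch matrices.
    t = F.unfold(target, patch, padding=patch // 2)    # (B, D, L)
    n = F.unfold(neighbor, patch, padding=patch // 2)  # (B, D, L)
    t_n = F.normalize(t, dim=1)
    n_n = F.normalize(n, dim=1)
    # Cosine similarity between every target patch and every neighbor patch.
    sim = torch.bmm(t_n.transpose(1, 2), n_n)          # (B, L, L)
    topv, topi = sim.topk(k, dim=2)                    # (B, L, k)
    weights = torch.softmax(topv, dim=2)               # (B, L, k)
    # Gather the k best neighbor patches for each target patch.
    d, l = n.shape[1], n.shape[2]
    gathered = torch.gather(
        n.unsqueeze(2).expand(b, d, t.shape[2], l),    # (B, D, L, L)
        3, topi.unsqueeze(1).expand(b, d, -1, k))      # -> (B, D, L, k)
    agg = (gathered * weights.unsqueeze(1)).sum(dim=3) # (B, D, L)
    # Fold aggregated patches back to a (B, C, H, W) map, averaging overlaps.
    out = F.fold(agg, (h, w), patch, padding=patch // 2)
    norm = F.fold(torch.ones_like(agg), (h, w), patch, padding=patch // 2)
    return out / norm
```

A real implementation would likely restrict the candidate search to a local window around each position rather than computing the full L-by-L similarity, which becomes expensive for large feature maps.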
