Paper Title

Blind Video Temporal Consistency via Deep Video Prior

Authors

Chenyang Lei, Yazhou Xing, Qifeng Chen

Abstract

Applying image processing algorithms independently to each video frame often leads to temporal inconsistency in the resulting video. To address this issue, we present a novel and general approach for blind video temporal consistency. Our method is trained directly on a single pair of original and processed videos rather than on a large dataset. Unlike most previous methods that enforce temporal consistency with optical flow, we show that temporal consistency can be achieved by training a convolutional network on a video with the Deep Video Prior. Moreover, a carefully designed iteratively reweighted training strategy is proposed to address the challenging multimodal inconsistency problem. We demonstrate the effectiveness of our approach on 7 computer vision tasks on videos. Extensive quantitative and perceptual experiments show that our approach obtains superior performance over state-of-the-art methods for blind video temporal consistency. Our source code is publicly available at github.com/ChenyangLEI/deep-video-prior.
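
To make the training setup concrete, here is a minimal, hypothetical PyTorch sketch of the idea described in the abstract, written from the abstract alone rather than the official code (the released repository is a separate implementation). The network `SmallCNN`, the `irt_loss` helper, and all hyperparameters are illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch of Deep Video Prior (DVP) training on ONE video pair.
# Not the official implementation; all names, architectures, and
# hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Placeholder network f_theta; the paper uses a larger CNN.
    Outputs two candidate frames (main + minor) so the loss can
    illustrate iteratively reweighted training (IRT)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 6, 3, padding=1),  # 2 x RGB outputs
        )

    def forward(self, x):
        out = self.net(x)
        return out[:, :3], out[:, 3:]  # main frame, minor frame

def irt_loss(main, minor, target):
    """Simplified iteratively reweighted loss: each pixel is assigned to
    whichever output branch currently reconstructs it better, so the main
    branch can converge to the dominant mode of a multimodal processed
    video instead of averaging the modes together."""
    d_main = (main - target).abs().mean(dim=1, keepdim=True)
    d_minor = (minor - target).abs().mean(dim=1, keepdim=True)
    mask = (d_main <= d_minor).float()  # per-pixel confidence map
    return (mask * (main - target).abs()).mean() + \
           ((1 - mask) * (minor - target).abs()).mean()

def train_dvp(frames_in, frames_proc, epochs=25, lr=1e-4):
    """frames_in / frames_proc: lists of (3, H, W) tensors for one video.
    The CNN is fit frame by frame to map original frames to their
    (flickering) processed frames. Because it learns the temporally
    consistent component of the mapping before memorizing per-frame
    flicker (the deep video prior), training is stopped early and
    model(frame) is then used as the temporally consistent output."""
    model = SmallCNN()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in zip(frames_in, frames_proc):
            x, y = x.unsqueeze(0), y.unsqueeze(0)
            main, minor = model(x)
            loss = irt_loss(main, minor, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

At inference time, only the main branch of the early-stopped network would be run on each original frame; the minor branch exists solely to absorb the minority mode during training.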
