Paper Title

CUF: Continuous Upsampling Filters

Authors

Cristina Vasconcelos, Cengiz Oztireli, Mark Matthews, Milad Hashemi, Kevin Swersky, Andrea Tagliasacchi

Abstract


Neural fields have rapidly been adopted for representing 3D signals, but their application to more classical 2D image-processing has been relatively limited. In this paper, we consider one of the most important operations in image processing: upsampling. In deep learning, learnable upsampling layers have extensively been used for single image super-resolution. We propose to parameterize upsampling kernels as neural fields. This parameterization leads to a compact architecture that obtains a 40-fold reduction in the number of parameters when compared with competing arbitrary-scale super-resolution architectures. When upsampling images of size 256x256 we show that our architecture is 2x-10x more efficient than competing arbitrary-scale super-resolution architectures, and more efficient than sub-pixel convolutions when instantiated to a single-scale model. In the general setting, these gains grow polynomially with the square of the target scale. We validate our method on standard benchmarks showing such efficiency gains can be achieved without sacrifices in super-resolution performance.
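The core idea, parameterizing upsampling kernels as a neural field, can be sketched as follows: a small MLP maps the fractional sub-pixel offset of each high-resolution output pixel to a dynamic filter, which is then applied to the corresponding low-resolution neighborhood. This is a minimal illustrative sketch, not the paper's actual architecture; the MLP sizes, kernel size, and function names are assumptions, and the parameters are untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_kernel(offset, params, ksize=3):
    """Tiny 2-layer MLP: sub-pixel offset (dy, dx) -> a ksize*ksize filter.
    Illustrative stand-in for a neural-field kernel parameterization."""
    w1, b1, w2, b2 = params
    h = np.tanh(offset @ w1 + b1)          # hidden features
    return (h @ w2 + b2).reshape(ksize, ksize)

def upsample(lr, params, scale=2, ksize=3):
    """Upsample a single-channel image by predicting one filter per
    sub-pixel phase and applying it to the local LR neighborhood."""
    H, W = lr.shape
    pad = ksize // 2
    lr_pad = np.pad(lr, pad, mode="edge")
    hr = np.zeros((H * scale, W * scale))
    for i in range(H * scale):
        for j in range(W * scale):
            ci, cj = i // scale, j // scale            # source LR pixel
            offset = np.array([(i % scale) / scale,    # fractional phase
                               (j % scale) / scale])
            k = mlp_kernel(offset, params, ksize)
            patch = lr_pad[ci:ci + ksize, cj:cj + ksize]
            hr[i, j] = np.sum(k * patch)
    return hr

# Random (untrained) parameters for a 2 -> 16 -> 9 MLP.
params = (rng.normal(size=(2, 16)), np.zeros(16),
          rng.normal(size=(16, 9)) * 0.1, np.zeros(9))

lr = rng.random((8, 8))
hr = upsample(lr, params, scale=3)
print(hr.shape)  # (24, 24)
```

Because the filter is a continuous function of the sub-pixel offset, the same MLP parameters serve any integer (or, with interpolated source coordinates, fractional) scale, which is what makes a single compact model arbitrary-scale, in contrast to sub-pixel convolution, whose output channel count is tied to one fixed scale.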
