Paper Title

Efficient Non-Local Contrastive Attention for Image Super-Resolution

Authors

Bin Xia, Yucheng Hang, Yapeng Tian, Wenming Yang, Qingmin Liao, Jie Zhou

Abstract

Non-Local Attention (NLA) brings significant improvement to Single Image Super-Resolution (SISR) by leveraging the intrinsic feature correlations in natural images. However, NLA assigns large weights to noisy information and consumes quadratic computation resources with respect to the input size, limiting its performance and application. In this paper, we propose a novel Efficient Non-Local Contrastive Attention (ENLCA) to perform long-range visual modeling and leverage more relevant non-local features. Specifically, ENLCA consists of two parts: Efficient Non-Local Attention (ENLA) and Sparse Aggregation. ENLA adopts the kernel method to approximate the exponential function and obtains linear computation complexity. For Sparse Aggregation, we multiply the inputs by an amplification factor to focus on informative features, yet the variance of the approximation increases exponentially. Therefore, contrastive learning is applied to further separate relevant and irrelevant features. To demonstrate the effectiveness of ENLCA, we build an architecture called the Efficient Non-Local Contrastive Network (ENLCN) by adding a few of our modules to a simple backbone. Extensive experimental results show that ENLCN achieves superior performance over state-of-the-art approaches in both quantitative and qualitative evaluations.
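The linear-complexity attention described above can be sketched with positive random features that approximate the exponential similarity kernel, so the softmax-style aggregation factorizes and the key-value product is computed once instead of per query. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the feature map `phi(x) = exp(w·x − ‖x‖²/2)/√m` and the amplification factor `amp` (the paper's sparse-aggregation scaling) are illustrative choices, and function names are hypothetical.

```python
import numpy as np

def random_feature_map(x, w):
    """Positive random features so that E[phi(q) . phi(k)] = exp(q . k)."""
    m = w.shape[0]
    proj = x @ w.T                                   # (n, m) random projections
    norm = np.sum(x ** 2, axis=-1, keepdims=True) / 2
    return np.exp(proj - norm) / np.sqrt(m)

def efficient_attention(q, k, v, num_features=4096, amp=1.0, seed=0):
    """Kernel-approximated non-local attention, linear in sequence length n.

    amp > 1 sharpens the attention toward the most similar features
    (the amplification idea behind sparse aggregation), at the cost of
    exponentially larger approximation variance.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((num_features, q.shape[-1]))
    scale = np.sqrt(amp)                             # exp(amp * q.k) = exp((sq).(sk))
    qp = random_feature_map(q * scale, w)            # (n, m)
    kp = random_feature_map(k * scale, w)            # (n, m)
    kv = kp.T @ v                                    # (m, d): shared across queries
    z = qp @ kp.sum(axis=0)                          # (n,) softmax-style normalizer
    return (qp @ kv) / z[:, None]
```

Because `kv` and the normalizer are computed once, the cost is O(n·m·d) rather than the O(n²) of exact non-local attention; the sketch omits the contrastive-learning loss that the paper uses to separate relevant from irrelevant features.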
