Paper Title

PAMS: Quantized Super-Resolution via Parameterized Max Scale

Paper Authors

Huixia Li, Chenqian Yan, Shaohui Lin, Xiawu Zheng, Yuchao Li, Baochang Zhang, Fan Yang, Rongrong Ji

Paper Abstract

Deep convolutional neural networks (DCNNs) have shown dominant performance in the task of super-resolution (SR). However, their heavy memory cost and computation overhead significantly restrict their practical deployments on resource-limited devices, which mainly arise from the floating-point storage and operations between weights and activations. Although previous endeavors mainly resort to fixed-point operations, quantizing both weights and activations with fixed coding lengths may cause significant performance drop, especially on low bits. Specifically, most state-of-the-art SR models without batch normalization have a large dynamic quantization range, which also serves as another cause of performance drop. To address these two issues, we propose a new quantization scheme termed PArameterized Max Scale (PAMS), which applies the trainable truncated parameter to explore the upper bound of the quantization range adaptively. Finally, a structured knowledge transfer (SKT) loss is introduced to fine-tune the quantized network. Extensive experiments demonstrate that the proposed PAMS scheme can well compress and accelerate the existing SR models such as EDSR and RDN. Notably, 8-bit PAMS-EDSR improves PSNR on Set5 benchmark from 32.095dB to 32.124dB with 2.42$\times$ compression ratio, which achieves a new state-of-the-art.
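
To make the scheme concrete, below is a minimal PyTorch sketch of a symmetric k-bit quantizer whose clipping threshold (the "max scale") is a trainable parameter, in the spirit of the abstract. The class name `PAMSQuantizer`, the `init_alpha` default, and the straight-through rounding estimator are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class _RoundSTE(torch.autograd.Function):
    """Rounding with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Treat round() as the identity in the backward pass.
        return grad_output


class PAMSQuantizer(nn.Module):
    """Symmetric k-bit quantizer with a trainable clipping threshold.

    The threshold `alpha` is a learnable parameter, so the upper bound of
    the quantization range is adapted by gradient descent during
    fine-tuning instead of being fixed to the tensor's observed max.
    `init_alpha` is a hypothetical default, not the authors' setting.
    """

    def __init__(self, bits: int = 8, init_alpha: float = 10.0):
        super().__init__()
        self.bits = bits
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = self.alpha.abs()
        levels = 2 ** (self.bits - 1) - 1  # e.g. 127 for 8 bits
        # Clip to the learnable range [-alpha, alpha]; gradients w.r.t.
        # alpha flow through the clipped elements.
        x_clipped = torch.min(torch.max(x, -alpha), alpha)
        # Uniform quantize-dequantize (simulated quantization).
        scale = alpha / levels
        return _RoundSTE.apply(x_clipped / scale) * scale


# Example: simulate 8-bit activation quantization on a feature map.
quant = PAMSQuantizer(bits=8)
y = quant(torch.randn(1, 64, 32, 32))
```

In a full pipeline, such a quantizer would be applied to the weights and activations of each layer, and the quantized network fine-tuned with the structured knowledge transfer (SKT) loss mentioned in the abstract, i.e., a distillation term that aligns the student's feature maps with those of the full-precision teacher; that part is omitted from this sketch.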
