Paper Title


Shape Adaptor: A Learnable Resizing Module

Authors

Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi, Edward Johns

Abstract


We present a novel resizing module for neural networks: shape adaptor, a drop-in enhancement built on top of traditional resizing layers, such as pooling, bilinear sampling, and strided convolution. Whilst traditional resizing layers have fixed and deterministic reshaping factors, our module allows for a learnable reshaping factor. Our implementation enables shape adaptors to be trained end-to-end without any additional supervision, through which network architectures can be optimised for each individual task, in a fully automated way. We performed experiments across seven image classification datasets, and results show that by simply using a set of our shape adaptors instead of the original resizing layers, performance increases consistently over human-designed networks, across all datasets. Additionally, we show the effectiveness of shape adaptors on two other applications: network compression and transfer learning. The source code is available at: https://github.com/lorenmt/shape-adaptor.
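The abstract describes replacing a fixed reshaping factor (e.g. the 0.5x of a 2x2 pooling layer) with a learnable one, trained end-to-end with no extra supervision. A minimal sketch of that idea is below, assuming the module blends two fixed resizing branches through a sigmoid-gated parameter; the names `alpha`, `r1`, and `r2` are illustrative, and the paper's exact formulation may differ (see the linked source code for the authors' implementation).

```python
import math

def shape_adaptor_scale(alpha: float, r1: float = 0.5, r2: float = 1.0) -> float:
    """Blend two fixed reshaping factors into one learnable factor.

    Example branches: a 2x-downsampling branch (r1 = 0.5) and an
    identity branch (r2 = 1.0). `alpha` is a raw learnable parameter;
    the sigmoid keeps the blend weight in (0, 1), so the resulting
    reshaping factor stays inside [r1, r2].
    """
    a = 1.0 / (1.0 + math.exp(-alpha))  # sigmoid gate in (0, 1)
    return (1.0 - a) * r1 + a * r2      # learnable reshaping factor

def output_size(in_size: int, alpha: float) -> int:
    """Spatial size the feature map would be resized to (e.g. bilinearly)."""
    return max(1, round(in_size * shape_adaptor_scale(alpha)))
```

Because the reshaping factor is a smooth function of `alpha`, gradients from the task loss can adjust it during training, which is what lets the network's spatial architecture be optimised automatically rather than hand-designed. For example, `alpha = 0.0` gives a factor of 0.75, so a 32x32 feature map would be resized to 24x24.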
