Paper Title

ThreshNet: An Efficient DenseNet Using Threshold Mechanism to Reduce Connections

Paper Authors

Rui-Yang Ju, Ting-Yu Lin, Jia-Hao Jian, Jen-Shiun Chiang, Wei-Bin Yang

Paper Abstract

With the continuous development of neural networks for computer vision tasks, more and more network architectures have achieved outstanding success. As one of the most advanced neural network architectures, DenseNet shortcuts all feature maps to solve the problem of model depth. Although this architecture achieves excellent accuracy with few parameters, it requires excessive inference time. To address this, HarDNet reduces the connections between feature maps so that the remaining connections resemble harmonic waves. However, this compression may reduce model accuracy and increase the number of parameters and the model size. The architecture may reduce memory access time, but its overall performance can still be improved. We therefore propose a new network architecture, ThreshNet, which uses a threshold mechanism to further optimize the connection pattern: a different number of connections is discarded at different convolution layers to accelerate inference. The proposed network was evaluated on image classification with the CIFAR-10 and SVHN datasets on NVIDIA RTX 3050 and Raspberry Pi 4 platforms. The experimental results show that, compared with HarDNet68, GhostNet, MobileNetV2, ShuffleNet, and EfficientNet, the inference time of the proposed ThreshNet79 is 5%, 9%, 10%, 18%, and 20% faster, respectively. ThreshNet95 has 55% fewer parameters than HarDNet85. These model compression and acceleration methods speed up inference, enabling network models to run on mobile devices.
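
The abstract only states that ThreshNet discards a different number of connections at different convolution layers; the exact rule is not given here. The following minimal plain-Python sketch contrasts the three connectivity patterns mentioned above. The `dense_inputs` and `harmonic_inputs` functions follow the published DenseNet and HarDNet connection rules; the `threshnet_inputs` rule (dense connections up to a `threshold` layer index, harmonic above it) is an illustrative assumption, not the paper's actual method.

```python
# Minimal sketch (plain Python) of the three connection schemes discussed
# in the abstract. Each function returns the indices of the earlier layers
# whose feature maps layer k takes as input.

def dense_inputs(k):
    """DenseNet: layer k receives the feature maps of all earlier layers."""
    return list(range(k))

def harmonic_inputs(k):
    """HarDNet: layer k links to layer k - 2**n whenever 2**n divides k,
    so the remaining connections resemble harmonic waves."""
    links, step = set(), 1
    while step <= k:
        if k % step == 0:
            links.add(k - step)
        step *= 2
    return sorted(links)

def threshnet_inputs(k, threshold):
    """Hypothetical threshold rule (an assumption, not the paper's exact
    method): keep dense connections up to `threshold`, then switch to the
    sparser harmonic pattern, so deeper layers discard more connections."""
    return dense_inputs(k) if k <= threshold else harmonic_inputs(k)

if __name__ == "__main__":
    for k in range(1, 9):
        print(f"layer {k}: dense={dense_inputs(k)}, "
              f"harmonic={harmonic_inputs(k)}, "
              f"threshnet={threshnet_inputs(k, threshold=4)}")
```

Running the sketch shows why such a scheme can cut inference time: dense connectivity grows the number of concatenated inputs linearly with depth, while the harmonic pattern keeps it logarithmic, so moving the threshold trades accuracy-preserving dense links for fewer memory accesses in the deeper layers.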
