Paper Title
RANP: Resource Aware Neuron Pruning at Initialization for 3D CNNs
Paper Authors
Paper Abstract
Although 3D Convolutional Neural Networks (CNNs) are essential for most learning-based applications involving dense 3D data, their applicability is limited due to excessive memory and computational requirements. Compressing such networks by pruning therefore becomes highly desirable. However, pruning 3D CNNs is largely unexplored, possibly because of the complex nature of typical pruning algorithms, which embed pruning into an iterative optimization paradigm. In this work, we introduce a Resource Aware Neuron Pruning (RANP) algorithm that prunes 3D CNNs at initialization to high sparsity levels. Specifically, the core idea is to obtain an importance score for each neuron based on its sensitivity to the loss function. This neuron importance is then reweighted according to the neuron's resource consumption in terms of FLOPs or memory. We demonstrate the effectiveness of our pruning method on 3D semantic segmentation with widely used 3D-UNets on ShapeNet and BraTS'18, as well as on video classification with MobileNetV2 and I3D on the UCF101 dataset. In these experiments, our RANP leads to a roughly 50-95% reduction in FLOPs and a 35-80% reduction in memory with negligible loss in accuracy compared to the unpruned networks. This significantly reduces the computational resources required to train 3D CNNs. The pruned network obtained by our algorithm can also be easily scaled up and transferred to another dataset for training.
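The abstract's core mechanism (a per-neuron loss-sensitivity score, reweighted by per-neuron resource cost) can be sketched in a few lines. Below is a minimal PyTorch illustration; the function names (`neuron_importance`, `reweight_by_cost`), the sensitivity approximation |∂L/∂w ⊙ w| summed per output channel, and the divide-by-cost reweighting are illustrative assumptions for exposition, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

def neuron_importance(model, loss_fn, x, y):
    """Score each output channel (neuron) of every Conv3d layer by its
    sensitivity to the loss, evaluated at initialization on one batch."""
    convs = [m for m in model.modules() if isinstance(m, nn.Conv3d)]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, [m.weight for m in convs])
    scores = {}
    for m, g in zip(convs, grads):
        # |dL/dw * w|, summed over input channels and the 3D kernel,
        # approximates the loss change caused by removing each output channel.
        scores[m] = (g * m.weight).abs().sum(dim=(1, 2, 3, 4))
    return scores

def reweight_by_cost(scores, cost):
    """Make scores resource-aware: favor neurons with high importance per
    unit of FLOPs or memory. cost maps each layer to a per-neuron cost
    tensor; division is one plausible reweighting, the paper's exact
    scheme may differ."""
    return {m: s / cost[m] for m, s in scores.items()}
```

A pruning pass at initialization would then keep the globally top-scoring neurons until a FLOPs or memory budget is met, and the resulting slimmer network is trained from scratch, which is what yields the reduced training cost the abstract reports.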