Paper Title
A Unified Weight Initialization Paradigm for Tensorial Convolutional Neural Networks
Paper Authors
Paper Abstract
Tensorial Convolutional Neural Networks (TCNNs) have attracted much research attention for their ability to reduce model parameters and enhance generalization. However, exploration of TCNNs is hindered even at the stage of weight initialization. Specifically, general initialization methods, such as Xavier or Kaiming initialization, usually fail to generate appropriate weights for TCNNs. Meanwhile, although there are ad hoc approaches for specific architectures (e.g., Tensor Ring Nets), they are not applicable to TCNNs built on other tensor decompositions (e.g., CP or Tucker decomposition). To address this problem, we propose a universal weight initialization paradigm that generalizes the Xavier and Kaiming methods and is widely applicable to arbitrary TCNNs. Specifically, we first present the Reproducing Transformation, which converts the backward process in TCNNs into an equivalent convolution process. Then, based on the convolution operators in the forward and backward processes, we build a unified paradigm to control the variance of features and gradients in TCNNs. Thus, we can derive fan-in and fan-out initializations for various TCNNs. We demonstrate that our paradigm stabilizes the training of TCNNs, leading to faster convergence and better results.
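To make the variance-control idea concrete, the Python sketch below (assuming PyTorch) shows one plausible fan-in initialization for a CP-decomposed convolution kernel. It is a minimal illustration under stated assumptions, not the paper's actual derivation; the function name cp_fanin_init and all shapes are hypothetical. A CP kernel W[o, i, h, w] = Σ_r A[o, r] B[i, r] C[h, r] D[w, r] is a sum of rank products of factor entries, so for zero-mean i.i.d. factors Var[W] = rank · Π_k Var[factor_k]; choosing each factor's variance so that this product hits the Kaiming fan-in target gain / fan_in keeps the effective kernel, and hence the forward features, at the desired scale.

```python
import torch

def cp_fanin_init(factors, fan_in, gain=2.0):
    """Hypothetical sketch: initialize CP factor matrices so that the
    reconstructed kernel W = sum_r prod_k factor_k[:, r] has a
    Kaiming-style fan-in variance, Var[W] = gain / fan_in.

    Each kernel entry is a sum of `rank` products of independent
    zero-mean factor entries, hence Var[W] = rank * prod_k Var[factor_k].
    Giving every factor the same variance (the geometric mean of
    target / rank) keeps the product at the target.
    """
    rank = factors[0].shape[-1]
    target = gain / fan_in                               # desired Var[W]
    per_factor_var = (target / rank) ** (1.0 / len(factors))
    for f in factors:
        f.normal_(mean=0.0, std=per_factor_var ** 0.5)

# Example: CP factors for a conv kernel of shape (C_out, C_in, kH, kW), rank R.
C_out, C_in, kH, kW, R = 64, 32, 3, 3, 8
factors = [torch.empty(C_out, R), torch.empty(C_in, R),
           torch.empty(kH, R), torch.empty(kW, R)]
cp_fanin_init(factors, fan_in=C_in * kH * kW)  # fan-in of the full kernel
```

A fan-out variant would substitute C_out * kH * kW for the fan-in, mirroring the backward convolution; the paper's paradigm derives both cases uniformly across decompositions such as CP, Tucker, and Tensor Ring.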