Paper Title
On the Symmetries of Deep Learning Models and their Internal Representations
Paper Authors
Paper Abstract
Symmetry is a fundamental tool in the exploration of a broad range of complex systems. In machine learning, symmetry has been explored in both models and data. In this paper, we seek to connect the symmetries arising from the architecture of a family of models with the symmetries of that family's internal representations of data. We do this by calculating a set of fundamental symmetry groups, which we call the intertwiner groups of the model. We connect intertwiner groups to a model's internal representations of data through a range of experiments that probe similarities between hidden states across models with the same architecture. Our work suggests that the symmetries of a network propagate into symmetries of that network's representations of data, giving us a better understanding of how architecture affects the learning and prediction process. Finally, we speculate that for ReLU networks, the intertwiner groups may provide a justification for the common practice of concentrating model-interpretability exploration on the activation basis in hidden layers rather than on arbitrary linear combinations thereof.
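To make the abstract's central notion concrete: for a ReLU layer, any matrix T that is a permutation composed with a positive diagonal scaling satisfies ReLU(Tz) = T·ReLU(z), so pushing T into one weight matrix and pulling T⁻¹ out of the next yields a different parameterization of the same function. The NumPy sketch below illustrates this symmetry on a toy two-layer network; the dimensions, random weights, and variable names are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2
d_in, d_hidden, d_out = 4, 6, 3
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def relu(z):
    return np.maximum(z, 0.0)

def f(x, W1, b1, W2, b2):
    return W2 @ relu(W1 @ x + b1) + b2

# A symmetry of the ReLU nonlinearity: a permutation composed with a
# positive diagonal scaling. ReLU(T z) = T ReLU(z) holds for exactly
# such T, so conjugating the adjacent weights by T preserves f.
perm = rng.permutation(d_hidden)
P = np.eye(d_hidden)[perm]                     # permutation matrix
D = np.diag(rng.uniform(0.5, 2.0, d_hidden))   # positive scalings
T = P @ D

# Transformed weights: push T into layer 1, pull T^{-1} out of layer 2.
W1t, b1t = T @ W1, T @ b1
W2t = W2 @ np.linalg.inv(T)

x = rng.normal(size=d_in)
assert np.allclose(f(x, W1, b1, W2, b2), f(x, W1t, b1t, W2t, b2))
print("Outputs match: the transformed network computes the same function.")

The positivity of the scaling is essential: ReLU is positively homogeneous but not linear, so negative scalings (and general invertible maps) break the identity ReLU(Tz) = T·ReLU(z). This is why the symmetry group associated with a ReLU layer is far smaller than the full general linear group, and it is the intuition behind the abstract's closing remark that individual activation coordinates, rather than arbitrary linear combinations of them, are a distinguished basis for interpretability.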