Paper Title

Deep Layer-wise Networks Have Closed-Form Weights

Paper Authors

Chieh Wu, Aria Masoomi, Arthur Gretton, Jennifer Dy

Paper Abstract

There is currently a debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP). To better mimic the brain, training a network \textit{one layer at a time} with only a "single forward pass" has been proposed as an alternative to bypass BP; we refer to these networks as "layer-wise" networks. We continue the work on layer-wise networks by answering two outstanding questions. First, $\textit{do they have a closed-form solution?}$ Second, $\textit{how do we know when to stop adding more layers?}$ This work proves that the Kernel Mean Embedding is the closed-form weight that achieves the network global optimum while driving these networks to converge towards a highly desirable kernel for classification; we call it the $\textit{Neural Indicator Kernel}$.
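
The abstract's central claim is that the Kernel Mean Embedding (KME) is the closed-form weight of a layer-wise network. As a minimal sketch of the standard empirical KME the abstract refers to, the snippet below estimates the mean embedding of a sample under an RBF kernel and evaluates it at probe points. The RBF kernel choice, the bandwidth `sigma`, and the function names (`rbf_kernel`, `empirical_kme`) are illustrative assumptions; the paper's exact construction of the weights and of the Neural Indicator Kernel is given in the paper itself.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Pairwise RBF kernel values k(x, z) = exp(-||x - z||^2 / (2 * sigma^2)).
    # X: (n, d) sample points, Z: (m, d) probe points -> (n, m) matrix.
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def empirical_kme(X, Z, sigma=1.0):
    # Empirical kernel mean embedding of the sample X, evaluated at Z:
    # mu_hat(z) = (1/n) * sum_i k(x_i, z).
    return rbf_kernel(X, Z, sigma).mean(axis=0)

# Toy usage: compare the embeddings of two classes at shared probe points.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=-1.0, size=(50, 2))
class_b = rng.normal(loc=+1.0, size=(50, 2))
probes = rng.normal(size=(5, 2))
print(empirical_kme(class_a, probes))
print(empirical_kme(class_b, probes))
```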
