Paper Title

Relaxing the Constraints on Predictive Coding Models

Paper Authors

Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L. Buckley

Paper Abstract

Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs, which underlies both perception and learning, is the minimization of prediction errors. While motivated by high-level notions of variational inference, detailed neurophysiological models of cortical microcircuits which can implement its computations have been developed. Moreover, under certain conditions, predictive coding has been shown to approximate the backpropagation of error algorithm, and thus provides a relatively biologically plausible credit-assignment mechanism for training deep networks. However, standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity. In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance. Our work thus relaxes current constraints on potential microcircuit designs and hopefully opens up new regions of the design-space for neuromorphic implementations of predictive coding.
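
The sketch below is a minimal illustration of the backward-weight relaxation the abstract describes, assuming a fully connected predictive coding network of the kind studied in this line of work: instead of carrying errors back through the transpose of the forward weights, a separate set of backward weights is learned with a Hebbian-style update. The layer sizes, learning rates, and the specific backward update rule are illustrative assumptions rather than the paper's exact implementation; the nonlinear-derivative terms are kept in the inference step, although the paper reports these can also be removed without noticeably harming performance, and the 1-1 error-unit relaxation is not shown here.

import numpy as np

def f(x):
    return np.tanh(x)

def f_prime(x):
    return 1.0 - np.tanh(x) ** 2

rng = np.random.default_rng(0)
sizes = [10, 30, 30, 5]   # hypothetical layer widths: input, two hidden, output
L = len(sizes) - 1

# Forward weights W[l] predict layer l+1 from layer l. Backward weights B[l]
# carry errors from layer l+1 down to layer l; they form a separate, learned
# parameter set rather than being tied to W[l].T as in standard implementations.
W = [rng.normal(0.0, 0.1, (sizes[l + 1], sizes[l])) for l in range(L)]
B = [rng.normal(0.0, 0.1, (sizes[l], sizes[l + 1])) for l in range(L)]

def train_step(x_in, y, n_inference=20, lr_x=0.1, lr_w=0.005):
    # Initialise activities with a forward sweep, then clamp input and target.
    x = [x_in]
    for l in range(L):
        x.append(W[l] @ f(x[l]))
    x[-1] = y

    # Inference: relax hidden activities to reduce the prediction errors.
    for _ in range(n_inference):
        e = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]
        for l in range(1, L):
            # f_prime(x[l]) is the "backward nonlinear derivative"; the paper
            # reports it can be dropped without noticeably harming learning.
            x[l] = x[l] + lr_x * (-e[l - 1] + f_prime(x[l]) * (B[l] @ e[l]))

    # Learning: purely local, Hebbian-style updates (products of the error at
    # one end of a connection and the activity at the other end).
    e = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]
    for l in range(L):
        W[l] += lr_w * np.outer(e[l], f(x[l]))
        # One simple (assumed) choice for learning the backward weights:
        # mirror the forward update, which drives B[l] toward W[l].T.
        B[l] += lr_w * np.outer(f(x[l]), e[l])

    return float(sum((err ** 2).sum() for err in e))

# Hypothetical usage: repeatedly relax and learn on a single input-target pair.
x_in = rng.normal(size=sizes[0])
y = rng.normal(size=sizes[-1])
for step in range(200):
    loss = train_step(x_in, y)

The design choice of learning B with the mirrored Hebbian rule is just one option consistent with the abstract's claim that the implausible features can be "removed either directly or through learning additional sets of parameters with Hebbian update rules"; fixed random backward weights are another possibility within the same relaxed design space.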
