Paper Title

Evolving-to-Learn Reinforcement Learning Tasks with Spiking Neural Networks

Paper Authors

Lu, J., Hagenaars, J. J., de Croon, G. C. H. E.

Paper Abstract

Inspired by the natural nervous system, synaptic plasticity rules are applied to train spiking neural networks with local information, making them suitable for online learning on neuromorphic hardware. However, when such rules are implemented to learn different new tasks, they usually require a significant amount of work on task-dependent fine-tuning. This paper aims to make this process easier by employing an evolutionary algorithm that evolves suitable synaptic plasticity rules for the task at hand. More specifically, we provide a set of various local signals, a set of mathematical operators, and a global reward signal, after which a Cartesian genetic programming process finds an optimal learning rule from these components. Using this approach, we find learning rules that successfully solve an XOR and cart-pole task, and discover new learning rules that outperform the baseline rules from literature.
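To make the approach described in the abstract concrete, the sketch below is a minimal, hypothetical illustration of the evolve-to-learn loop: candidate weight-update rules dw = f(pre, post, w, r) are encoded as small expression trees over local signals and a global reward, and an evolutionary search keeps the rules that best train a toy rate-based neuron. The tree encoding, the toy task, the fitness function, and all names are assumptions for illustration only; the paper itself uses Cartesian genetic programming with spiking neural networks on the XOR and cart-pole tasks.

```python
import random

# Operators and terminals from which candidate plasticity rules are built.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}
TERMINALS = ["pre", "post", "w", "r"]  # local signals, weight, global reward


def random_rule(depth=2):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_rule(depth - 1), random_rule(depth - 1))


def apply_rule(rule, signals):
    """Evaluate a rule tree on the current local/global signals."""
    if isinstance(rule, str):
        return signals[rule]
    op, left, right = rule
    return OPS[op](apply_rule(left, signals), apply_rule(right, signals))


def mutate(rule, p=0.2):
    """Replace random subtrees to produce an offspring rule."""
    if random.random() < p:
        return random_rule()
    if isinstance(rule, str):
        return rule
    op, left, right = rule
    return (op, mutate(left, p), mutate(right, p))


def fitness(rule, trials=20, steps=50):
    """Score a rule by how well it trains a single rate neuron y = w * x
    whose target output is 2 * x; the reward is the negative output error."""
    total = 0.0
    for _ in range(trials):
        w = random.uniform(-1.0, 1.0)
        for _ in range(steps):
            x = random.uniform(0.0, 1.0)              # presynaptic activity
            y = w * x                                 # postsynaptic activity
            r = -abs(y - 2.0 * x)                     # global reward signal
            dw = apply_rule(rule, {"pre": x, "post": y, "w": w, "r": r})
            w += 0.1 * max(-1.0, min(1.0, dw))        # bounded weight update
        total -= abs(w - 2.0)                         # distance to target weight
    return total / trials


# Simple truncation-selection evolutionary loop over candidate rules.
population = [random_rule() for _ in range(20)]
for generation in range(30):
    parents = sorted(population, key=fitness, reverse=True)[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print("best evolved rule:", best)
```

In this toy setup the search space contains a perfect solution such as ("sub", ("add", "pre", "pre"), "post"), i.e. dw = 2*pre - post, which drives the weight toward its target; the paper's contribution is finding analogous rules for spiking networks where only local traces and a global reward are available.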
