Paper Title

Deep Differentiable Logic Gate Networks

Paper Authors

Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen

Paper Abstract

Recently, research has increasingly focused on developing efficient neural network architectures. In this work, we explore logic gate networks for machine learning tasks by learning combinations of logic gates. These networks comprise logic gates such as "AND" and "XOR", which allow for very fast execution. The difficulty in learning logic gate networks is that they are conventionally non-differentiable and therefore do not allow training with gradient descent. Thus, to allow for effective training, we propose differentiable logic gate networks, an architecture that combines real-valued logics and a continuously parameterized relaxation of the network. The resulting discretized logic gate networks achieve fast inference speeds, e.g., beyond a million images of MNIST per second on a single CPU core.
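
To make the relaxation described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: the class name `DifferentiableLogicGate` and all other identifiers are illustrative. The idea is that each neuron relaxes the 16 possible two-input Boolean gates to real-valued functions on [0, 1] (e.g., AND becomes a*b, XOR becomes a + b - 2ab) and learns a softmax mixture over them; after training, the argmax gate is kept, giving a discrete network that executes quickly.

```python
import torch
import torch.nn as nn

# Real-valued relaxations of the 16 two-input Boolean gates,
# valid for inputs a, b in [0, 1] (illustrative ordering).
REAL_VALUED_GATES = [
    lambda a, b: torch.zeros_like(a),      # FALSE
    lambda a, b: a * b,                    # AND
    lambda a, b: a - a * b,                # A AND NOT B
    lambda a, b: a,                        # A
    lambda a, b: b - a * b,                # NOT A AND B
    lambda a, b: b,                        # B
    lambda a, b: a + b - 2 * a * b,        # XOR
    lambda a, b: a + b - a * b,            # OR
    lambda a, b: 1 - (a + b - a * b),      # NOR
    lambda a, b: 1 - (a + b - 2 * a * b),  # XNOR
    lambda a, b: 1 - b,                    # NOT B
    lambda a, b: 1 - b + a * b,            # A OR NOT B
    lambda a, b: 1 - a,                    # NOT A
    lambda a, b: 1 - a + a * b,            # NOT A OR B
    lambda a, b: 1 - a * b,                # NAND
    lambda a, b: torch.ones_like(a),       # TRUE
]

class DifferentiableLogicGate(nn.Module):
    """One neuron: a learned soft mixture over the 16 two-input gates."""

    def __init__(self):
        super().__init__()
        # One logit per candidate gate; softmax turns them into weights,
        # so which gate the neuron becomes is learnable by gradient descent.
        self.logits = nn.Parameter(torch.zeros(len(REAL_VALUED_GATES)))

    def forward(self, a, b):
        weights = torch.softmax(self.logits, dim=0)          # (16,)
        outputs = torch.stack([g(a, b) for g in REAL_VALUED_GATES])  # (16, batch)
        return (weights[:, None] * outputs).sum(dim=0)

# Usage example: train a single gate to behave like XOR.
gate = DifferentiableLogicGate()
opt = torch.optim.Adam(gate.parameters(), lr=0.1)
a = torch.tensor([0.0, 0.0, 1.0, 1.0])
b = torch.tensor([0.0, 1.0, 0.0, 1.0])
target = torch.tensor([0.0, 1.0, 1.0, 0.0])  # XOR truth table
for _ in range(200):
    opt.zero_grad()
    loss = ((gate(a, b) - target) ** 2).mean()
    loss.backward()
    opt.step()
# Discretize: keep only the highest-weighted gate for fast inference.
print("learned gate index:", gate.logits.argmax().item())  # expect 6 (XOR)
```

At inference time, replacing each trained neuron by its argmax gate yields a purely Boolean circuit, which is what enables the abstract's headline speed of over a million MNIST images per second on a single CPU core.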
