Paper Title

Implicit Convolutional Kernels for Steerable CNNs

Authors

Maksim Zhdanov, Nico Hoffmann, Gabriele Cesa

Abstract

Steerable convolutional neural networks (CNNs) provide a general framework for building neural networks equivariant to translations and transformations of an origin-preserving group $G$, such as reflections and rotations. They rely on standard convolutions with $G$-steerable kernels obtained by analytically solving the group-specific equivariance constraint imposed onto the kernel space. As the solution is tailored to a particular group $G$, implementing a kernel basis does not generalize to other symmetry transformations, complicating the development of general group equivariant models. We propose using implicit neural representation via multi-layer perceptrons (MLPs) to parameterize $G$-steerable kernels. The resulting framework offers a simple and flexible way to implement Steerable CNNs and generalizes to any group $G$ for which a $G$-equivariant MLP can be built. We prove the effectiveness of our method on multiple tasks, including N-body simulations, point cloud classification and molecular property prediction.
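The core idea of the abstract — replacing an analytically derived kernel basis with a kernel parameterized by a neural network — can be illustrated with a minimal sketch. The snippet below is a hypothetical toy example, not the authors' implementation: a small MLP (here a plain, non-equivariant one with made-up names `kernel_mlp` and `conv2d`) maps relative spatial offsets to kernel values, the kernel is sampled on a 3×3 grid, and the sampled weights are used in an ordinary 2D convolution. In the paper's actual method, the MLP would additionally be constrained to be $G$-equivariant so that the resulting kernel satisfies the steerability constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer MLP mapping R^2 -> R: a coordinate offset (dy, dx)
# is mapped to a scalar kernel value. The kernel is thus an implicit,
# continuous function of position rather than a fixed weight table.
W1 = rng.standard_normal((2, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)

def kernel_mlp(coords):
    """Evaluate the implicit kernel at an array of (dy, dx) offsets."""
    h = np.tanh(coords @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

# Sample the implicit kernel on a 3x3 grid of integer offsets.
offsets = np.array(
    [[dy, dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)], dtype=float
)
kernel = kernel_mlp(offsets).reshape(3, 3)

def conv2d(image, k):
    """Plain 2D cross-correlation with a 3x3 kernel ('valid' padding)."""
    H, W = image.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * k)
    return out

image = rng.standard_normal((8, 8))
features = conv2d(image, kernel)
print(features.shape)  # (6, 6)
```

Because the kernel is generated by a network rather than expanded in a group-specific analytic basis, swapping in a different symmetry group only requires swapping the MLP for one equivariant to that group, which is the flexibility the abstract emphasizes.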
