Paper Title
Pyramid Point: A Multi-Level Focusing Network for Revisiting Feature Layers
Paper Authors
Paper Abstract
We present a method to learn a diverse group of object categories from an unordered point set. We propose our Pyramid Point network, which uses a dense pyramid structure instead of the traditional 'U' shape typically seen in semantic segmentation networks. This pyramid structure gives a second look, allowing the network to revisit different layers simultaneously, increasing the contextual information by creating additional layers with less noise. We introduce a Focused Kernel Point convolution (FKP Conv), which expands on traditional point convolutions by adding an attention mechanism to the kernel outputs. This FKP Conv increases our feature quality and allows us to weight the kernel outputs dynamically. These FKP Convs are the central part of our Recurrent FKP Bottleneck block, which makes up the backbone of our encoder. With this distinct network, we demonstrate competitive performance on three benchmark datasets. We also perform an ablation study to show the positive effects of each element in our FKP Conv.
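The abstract's central idea, attending over the per-kernel-point outputs of a point convolution, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name `focused_kernel_point_conv`, the shapes, and the use of a simple softmax over learned logits are all assumptions for exposition. It shows how attention weights let the layer dynamically emphasize some kernel points over others before the outputs are combined.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def focused_kernel_point_conv(neighbor_feats, kernel_weights, attn_logits):
    """Hypothetical sketch of an attention-weighted kernel point convolution.

    neighbor_feats: (K, C_in)        per-kernel-point aggregated neighbor features
    kernel_weights: (K, C_in, C_out) one linear map per kernel point
    attn_logits:    (K,)             learned logits, softmaxed to focus kernels
    """
    # Each kernel point produces its own output feature: (K, C_out).
    per_kernel = np.einsum('kc,kcd->kd', neighbor_feats, kernel_weights)
    # Attention over the kernel outputs: a dynamic weight per kernel point,
    # replacing the plain sum a standard kernel point convolution would use.
    a = softmax(attn_logits)                       # (K,)
    return (a[:, None] * per_kernel).sum(axis=0)   # (C_out,)
```

With zero logits the attention is uniform and the layer reduces to an unweighted average of the kernel outputs; training the logits lets the network shift weight toward the most informative kernel points.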