Paper Title
Few 'Zero Level Set'-Shot Learning of Shape Signed Distance Functions in Feature Space
Paper Authors
Paper Abstract
We explore a new idea for learning-based shape reconstruction from a point cloud, building on the recently popularized implicit neural shape representations. We cast the problem as few-shot learning of implicit neural signed distance functions in feature space, which we approach using gradient-based meta-learning. We use a convolutional encoder to build a feature space from the input point cloud. An implicit decoder learns to predict signed distance values for points represented in this feature space. Setting the input point cloud, i.e. samples from the target shape function's zero level set, as the support (i.e. context) in few-shot learning terms, we train the decoder such that it can adapt its weights to the underlying shape of this context with a few (5) tuning steps. We thus combine two types of implicit neural network conditioning mechanisms simultaneously for the first time, namely feature encoding and meta-learning. Our numerical and qualitative evaluation shows that, in the context of implicit reconstruction from a sparse point cloud, our proposed strategy, i.e. meta-learning in feature space, outperforms existing alternatives, namely standard supervised learning in feature space and meta-learning in Euclidean space, while still providing fast inference.
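The inner-loop adaptation described in the abstract can be sketched as follows: the support points (samples from the target shape's zero level set) drive a few gradient steps that specialize the decoder's weights to that shape. This is a minimal, hypothetical illustration: the `features` stand-in for the convolutional encoder, the linear decoder, the learning rate, and all names are assumptions for clarity, not the paper's actual architecture.

```python
import numpy as np

def features(points):
    # Stand-in for the convolutional encoder's per-point feature lookup
    # (here just the coordinates plus a bias feature; purely illustrative).
    return np.concatenate([points, np.ones((len(points), 1))], axis=1)

def adapt(w, support_points, lr=0.1, steps=5):
    """Adapt decoder weights so f(x) = phi(x) @ w vanishes on the support,
    i.e. the support points are pushed toward the predicted zero level set."""
    phi = features(support_points)
    for _ in range(steps):
        pred = phi @ w                        # predicted signed distances
        grad = 2.0 * phi.T @ pred / len(pred)  # gradient of mean squared SDF
        w = w - lr * grad                     # one inner-loop tuning step
    return w

rng = np.random.default_rng(0)
support = rng.normal(size=(64, 3))  # assumed samples of the zero level set
w0 = rng.normal(size=4)             # meta-learned initialization (here random)
w5 = adapt(w0, support)             # 5 tuning steps, as in the abstract

phi = features(support)
loss0 = np.mean((phi @ w0) ** 2)
loss5 = np.mean((phi @ w5) ** 2)
print(loss5 < loss0)  # adaptation reduces the zero-level-set loss
```

In the full method, the initialization `w0` would itself be meta-learned so that these few steps suffice for any shape in the training distribution; here only the adaptation loop is shown.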