Paper Title

Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images

Authors

Jiahui Lei, Srinath Sridhar, Paul Guerrero, Minhyuk Sung, Niloy Mitra, Leonidas J. Guibas

Abstract

We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views. Previous work on learning shape reconstruction from multiple views uses discrete representations such as point clouds or voxels, while continuous surface generation approaches lack multi-view consistency. We address these issues by designing neural networks capable of generating high-quality parametric 3D surfaces which are also consistent between views. Furthermore, the generated 3D surfaces preserve accurate image pixel to 3D surface point correspondences, allowing us to lift texture information to reconstruct shapes with rich geometry and appearance. Our method is supervised and trained on a public dataset of shapes from common object categories. Quantitative results indicate that our method significantly outperforms previous work, while qualitative results demonstrate the high quality of our reconstructions.
