Paper Title
Texturify: Generating Textures on 3D Shape Surfaces
Paper Authors
Paper Abstract
Texture cues on 3D objects are key to compelling visual representations, with the potential to create high visual fidelity with inherent spatial consistency across different views. Since the availability of textured 3D shapes remains very limited, learning a 3D-supervised data-driven method that predicts a texture based on the 3D input is very challenging. We thus propose Texturify, a GAN-based method that leverages a 3D shape dataset of an object class and learns to reproduce the distribution of appearances observed in real images by generating high-quality textures. In particular, our method does not require any 3D color supervision or correspondence between shape geometry and images to learn the texturing of 3D objects. Texturify operates directly on the surface of the 3D objects by introducing face convolutional operators on a hierarchical 4-RoSy parametrization to generate plausible object-specific textures. Employing differentiable rendering and adversarial losses that critique individual views and consistency across views, we effectively learn the high-quality surface texturing distribution from real-world images. Experiments on car and chair shape collections show that our approach outperforms the state of the art by an average of 22% in FID score.
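The face convolutional operator on a 4-RoSy quad parametrization can be pictured as each quad face aggregating its own feature with those of its four edge-adjacent neighbor faces, each neighbor slot having its own learned weight. The following NumPy snippet is a minimal illustrative sketch of that idea, not the authors' implementation; the function name `face_conv`, the fixed neighbor ordering, and the weight layout are assumptions made for clarity:

```python
import numpy as np

def face_conv(features, neighbors, w_self, w_nbr):
    """One face-convolution step on a quad mesh (illustrative sketch).

    features:  (F, C)  per-face feature vectors
    neighbors: (F, 4)  indices of the 4 edge-adjacent faces of each face
    w_self:    (C, C_out)    weight applied to the center face
    w_nbr:     (4, C, C_out) one weight matrix per neighbor slot
    returns:   (F, C_out) updated per-face features
    """
    # Center-face contribution.
    out = features @ w_self
    # Add one weighted contribution per neighbor slot; a fixed slot
    # ordering stands in for the consistent orientation that the
    # 4-RoSy field provides in the actual method.
    for k in range(4):
        out += features[neighbors[:, k]] @ w_nbr[k]
    return out
```

Stacking such layers over a face hierarchy (coarse-to-fine quad meshes) would then mirror the hierarchical generator described in the abstract, with upsampling between levels.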