Paper Title
PeeledHuman: Robust Shape Representation for Textured 3D Human Body Reconstruction
Paper Authors
Paper Abstract
We introduce PeeledHuman - a novel shape representation of the human body that is robust to self-occlusions. PeeledHuman encodes the human body as a set of Peeled Depth and RGB maps in 2D, obtained by performing ray-tracing on the 3D body model and extending each ray beyond its first intersection. This formulation allows us to handle self-occlusions efficiently compared to other representations. Given a monocular RGB image, we learn these Peeled maps in an end-to-end generative adversarial fashion using our novel framework - PeelGAN. We train PeelGAN using a 3D Chamfer loss and other 2D losses to generate multiple depth values per pixel and a corresponding RGB field per vertex in a dual-branch setup. In our simple non-parametric solution, the generated Peeled Depth maps are back-projected to 3D space to obtain a complete textured 3D shape. The corresponding RGB maps provide vertex-level texture details. We compare our method with current parametric and non-parametric methods in 3D reconstruction and find that we achieve state-of-the-art results. We demonstrate the effectiveness of our representation on publicly available BUFF and MonoPerfCap datasets as well as loose clothing data collected by our calibrated multi-Kinect setup.
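To make the back-projection step concrete, below is a minimal sketch of how a stack of Peeled Depth maps could be fused into a single 3D point cloud under a pinhole camera model. The function name `backproject_peeled_depth`, the intrinsics matrix `K`, and the convention that zero depth marks a ray with no further intersection are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def backproject_peeled_depth(peeled_depths, K):
    """Back-project a stack of peeled depth maps into one 3D point cloud.

    peeled_depths: (L, H, W) array, one depth map per peeling layer;
                   zeros mark pixels where the ray has no intersection
                   at that layer (assumed convention).
    K:             (3, 3) pinhole camera intrinsics (assumed model).
    """
    num_layers, H, W = peeled_depths.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # Pixel grid shared by all peeling layers.
    u, v = np.meshgrid(np.arange(W), np.arange(H))

    points = []
    for layer in range(num_layers):
        z = peeled_depths[layer]
        valid = z > 0  # keep only rays that hit the body at this layer
        x = (u[valid] - cx) * z[valid] / fx
        y = (v[valid] - cy) * z[valid] / fy
        points.append(np.stack([x, y, z[valid]], axis=-1))

    # (N, 3) fused point cloud; per-pixel colors from the corresponding
    # Peeled RGB maps could be attached to these points in the same order.
    return np.concatenate(points, axis=0)
```

Each peeling layer contributes the points visible at that depth along the camera rays, so occluded body regions recovered in later layers are restored in the fused cloud.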