Paper Title
Attention-based 3D Object Reconstruction from a Single Image
Paper Authors
Paper Abstract
Recently, learning-based approaches for 3D reconstruction from 2D images have gained popularity due to their modern applications, e.g., 3D printers, autonomous robots, self-driving cars, virtual reality, and augmented reality. The computer vision community has made great efforts to develop methods that reconstruct the full 3D geometry of objects and scenes. However, to extract image features, these methods rely on convolutional neural networks, which are ineffective at capturing long-range dependencies. In this paper, we propose to substantially improve Occupancy Networks, a state-of-the-art method for 3D object reconstruction. To this end, we apply the concept of self-attention within the network's encoder in order to leverage complementary input features rather than those based only on local regions, helping the encoder extract global information. With our approach, we improve on the original work by 5.05% in mesh IoU, by 0.83% in Normal Consistency, and by more than 10X in Chamfer-L1 distance. We also perform a qualitative study showing that our approach generates much more consistent meshes, confirming its increased generalization power over the current state-of-the-art.
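To make the idea in the abstract concrete, the sketch below shows a SAGAN-style self-attention block of the kind that could be inserted into a convolutional image encoder (for example, the ResNet encoder used by Occupancy Networks) so that features at each spatial location can attend to every other location. This is a minimal sketch based on the abstract alone; the module name `SelfAttention2d`, the channel-reduction factor, and the placement inside the encoder are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a self-attention block for 2D
# feature maps that lets every spatial position attend to all other positions,
# complementing the local receptive field of the convolutions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention2d(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        # The //8 channel reduction is an assumed, commonly used choice.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable residual weight, initialized to zero so training starts
        # from the purely convolutional (local) features.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # (b, h*w, c//8)
        k = self.key(x).flatten(2)                      # (b, c//8, h*w)
        v = self.value(x).flatten(2)                    # (b, c, h*w)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (b, h*w, h*w) long-range weights
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection
```

A block like this is typically placed between intermediate stages of the encoder, where the spatial resolution is small enough for the (h*w) x (h*w) attention map to be affordable; initializing `gamma` to zero lets training gradually learn how much global context to mix into the local features.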