Title
Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images
Authors
Abstract
We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting. At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids. We present a novel physically-based differentiable volume ray marching framework to render these scene volumes under arbitrary viewpoint and lighting. This allows us to optimize the scene volumes to minimize the error between their rendered images and the captured images. Our method is able to reconstruct real scenes with challenging non-Lambertian reflectance and complex geometry with occlusions and shadowing. Moreover, it accurately generalizes to novel viewpoints and lighting, including non-collocated lighting, rendering photorealistic images that are significantly better than state-of-the-art mesh-based methods. We also show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
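The abstract does not include code, but the core rendering idea it describes, marching a ray through a voxel grid and compositing opacity-weighted radiance front-to-back, can be illustrated with a minimal NumPy sketch. The function name and inputs below are hypothetical; in the paper, the per-sample radiance would come from shading the voxel's normal and reflectance under the point light, and the whole loop would be differentiable (e.g. in PyTorch) so the voxel grids can be optimized against captured images.

```python
import numpy as np

def march_ray(opacities, radiances):
    """Front-to-back alpha compositing along one ray (illustrative sketch).

    opacities: (N,) per-sample alpha in [0, 1] along the ray.
    radiances: (N, 3) per-sample RGB radiance (e.g. shaded from the
               voxel's normal and reflectance under a point light).
    Returns the accumulated RGB color of the ray.
    """
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for alpha, rgb in zip(opacities, radiances):
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

# A fully opaque first sample hides everything behind it:
opacities = np.array([1.0, 0.5])
radiances = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
print(march_ray(opacities, radiances))  # -> [1. 0. 0.]
```

Because every operation here is a differentiable function of the opacities and radiances, gradients of an image-space error can flow back into the voxel grids, which is what enables the optimization described in the abstract.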