Paper Title
Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing
Paper Authors
Abstract
Indoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem. This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling. The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials. Specifically, we introduce a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo. In addition, we create a large-scale, photorealistic indoor scene dataset with significantly richer details like complex furniture and dedicated decorations. Further, we design a novel out-of-view lighting network with uncertainty-aware refinement leveraging hypernetwork-based neural radiance fields to predict lighting outside the view of the input photo. Through extensive evaluations on common benchmark datasets, we demonstrate superior inverse rendering quality of our method compared to state-of-the-art baselines, enabling various applications such as complex object insertion and material editing with high fidelity. Code and data will be made available at \url{https://jingsenzhu.github.io/invrend}.
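The abstract's core rendering component is Monte Carlo estimation of the rendering integral with importance sampling. As a minimal, generic illustration (not the paper's implementation), the sketch below estimates outgoing radiance for a Lambertian surface by drawing cosine-weighted hemisphere directions, so the sampling pdf cancels the cosine term in the estimator; all function names and the constant-lighting setup are hypothetical choices for this example.

```python
import numpy as np

def cosine_sample_hemisphere(n, rng):
    """Draw n cosine-weighted directions in the local frame (z = surface normal).

    The pdf of each direction is cos(theta) / pi.
    """
    u1, u2 = rng.random(n), rng.random(n)
    r = np.sqrt(u1)
    phi = 2.0 * np.pi * u2
    x, y = r * np.cos(phi), r * np.sin(phi)
    z = np.sqrt(1.0 - u1)  # cos(theta)
    return np.stack([x, y, z], axis=1)

def estimate_radiance(albedo, incoming_radiance, n_samples=4096, seed=0):
    """Monte Carlo estimate of L_o = ∫ f_r * L_i * cos(theta) dω.

    `incoming_radiance` maps an (n, 3) array of directions to radiance values.
    """
    rng = np.random.default_rng(seed)
    dirs = cosine_sample_hemisphere(n_samples, rng)
    cos_theta = dirs[:, 2]
    pdf = cos_theta / np.pi          # matches the sampling distribution above
    brdf = albedo / np.pi            # Lambertian BRDF is constant
    # Importance-sampled estimator: average of f_r * L_i * cos / pdf.
    contrib = brdf * incoming_radiance(dirs) * cos_theta / pdf
    return contrib.mean()
```

With a constant incident radiance the cosine-weighted pdf cancels the integrand's cosine exactly, so the estimator has zero variance and returns `albedo * L_i`; the paper's framework applies the same principle inside a differentiable, screen-space ray-traced renderer rather than this closed-form toy setting.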