Paper Title
Learning Neural Light Transport
Paper Authors
Abstract
In recent years, deep generative models have gained significance due to their ability to synthesize natural-looking images with applications ranging from virtual reality to data augmentation for training computer vision models. While existing models are able to faithfully learn the image distribution of the training set, they often lack controllability as they operate in 2D pixel space and do not model the physical image formation process. In this work, we investigate the importance of 3D reasoning for photorealistic rendering. We present an approach for learning light transport in static and dynamic 3D scenes using a neural network with the goal of predicting photorealistic images. In contrast to existing approaches that operate in the 2D image domain, our approach reasons in both 3D and 2D space, thus enabling global illumination effects and manipulation of 3D scene geometry. Experimentally, we find that our model is able to produce photorealistic renderings of static and dynamic scenes. Moreover, it compares favorably to baselines which combine path tracing and image denoising at the same computational budget.
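The abstract describes a model that reasons first in 3D and then in 2D image space. The following is a minimal, illustrative sketch of such a two-stage pipeline, not the authors' implementation: a per-point MLP (the 3D stage) predicts features for scene surface points, the features are splatted into the image plane at given pixel coordinates, and a small 2D CNN refines the resulting feature image into an RGB rendering. All module names, feature dimensions, input encodings, and the splatting scheme are assumptions for illustration only.

```python
# Illustrative sketch only: a two-stage "3D then 2D" neural rendering pipeline.
# Not the method from the paper; architecture details are assumed.

import torch
import torch.nn as nn


class PointFeatureMLP(nn.Module):
    """3D stage: maps per-point position + normal + albedo (9-D input, assumed) to a feature vector."""

    def __init__(self, in_dim=9, feat_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, pts):            # pts: (N, 9)
        return self.net(pts)           # (N, feat_dim)


class ImageRefinementCNN(nn.Module):
    """2D stage: refines the splatted feature image into an RGB rendering."""

    def __init__(self, feat_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat_img):       # (1, feat_dim, H, W)
        return self.net(feat_img)      # (1, 3, H, W)


def splat_features(features, pixel_uv, H, W):
    """Scatter per-point features into a (1, C, H, W) image at integer pixel coordinates."""
    C = features.shape[1]
    img = torch.zeros(C, H * W)
    idx = (pixel_uv[:, 1] * W + pixel_uv[:, 0]).long()  # flatten (u, v) -> pixel index
    img.index_add_(1, idx, features.t())                 # accumulate overlapping points
    return img.view(1, C, H, W)


if __name__ == "__main__":
    H, W, N = 64, 64, 1000
    pts = torch.randn(N, 9)                              # dummy surface samples
    uv = torch.randint(0, W, (N, 2))                     # dummy projected pixel coordinates
    feats = PointFeatureMLP()(pts)
    rgb = ImageRefinementCNN()(splat_features(feats, uv, H, W))
    print(rgb.shape)                                     # torch.Size([1, 3, 64, 64])
```

Because the 3D stage operates on scene geometry before projection, such a design in principle allows editing the geometry and re-rendering, which is the kind of controllability the abstract contrasts with purely 2D generative models.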