Title
Deep Appearance Prefiltering
Authors
Abstract
Physically based rendering of complex scenes can be prohibitively costly, with a potentially unbounded and uneven distribution of complexity across the rendered image. The goal of an ideal level of detail (LoD) method is to make rendering costs independent of the 3D scene complexity, while preserving the appearance of the scene. However, current prefiltering LoD methods are limited in the appearances they can support due to their reliance on approximate models and other heuristics. We propose the first comprehensive multi-scale LoD framework for prefiltering 3D environments with complex geometry and materials (e.g., the Disney BRDF), while maintaining the appearance with respect to the ray-traced reference. Using a multi-scale hierarchy of the scene, we perform a data-driven prefiltering step to obtain an appearance phase function and directional coverage mask at each scale. At the heart of our approach is a novel neural representation that encodes this information into a compact latent form that is easy to decode inside a physically based renderer. Once a scene is baked out, our method requires no original geometry, materials, or textures at render time. We demonstrate that our approach compares favorably to state-of-the-art prefiltering methods and achieves considerable savings in memory for complex scenes.