Paper Title

Neural Mesh-Based Graphics

Paper Authors

Shubhendu Jena, Franck Multon, Adnane Boukhayma

Abstract

We revisit NPBG, the popular approach to novel view synthesis that introduced the ubiquitous point feature neural rendering paradigm. We are interested in particular in data-efficient learning with fast view synthesis. We achieve this through a view-dependent mesh-based denser point descriptor rasterization, in addition to a foreground/background scene rendering split, and an improved loss. By training solely on a single scene, we outperform NPBG, which has been trained on ScanNet and then scene finetuned. We also perform competitively with respect to the state-of-the-art method SVS, which has been trained on the full dataset (DTU and Tanks and Temples) and then scene finetuned, in spite of their deeper neural renderer.
