Paper Title

E-NeRF: Neural Radiance Fields from a Moving Event Camera

Paper Authors

Simon Klenk, Lukas Koestler, Davide Scaramuzza, Daniel Cremers

Paper Abstract

Estimating neural radiance fields (NeRFs) from "ideal" images has been extensively studied in the computer vision community. Most approaches assume optimal illumination and slow camera motion. These assumptions are often violated in robotic applications, where images may contain motion blur, and the scene may not have suitable illumination. This can cause significant problems for downstream tasks such as navigation, inspection, or visualization of the scene. To alleviate these problems, we present E-NeRF, the first method which estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera. Our method can recover NeRFs during very fast motion and in high-dynamic-range conditions where frame-based approaches fail. We show that rendering high-quality frames is possible by only providing an event stream as input. Furthermore, by combining events and frames, we can estimate NeRFs of higher quality than state-of-the-art approaches under severe motion blur. We also show that combining events and frames can overcome failure cases of NeRF estimation in scenarios where only a few input views are available without requiring additional regularization.
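Supervising a NeRF with an event stream typically builds on the standard event-generation model: an event of polarity p fires at a pixel when its log intensity changes by the contrast threshold C, so the accumulated signed events over a time window predict the log-intensity difference between the rendered views at the window's start and end poses. The snippet below is a minimal, hypothetical sketch of such an event-based loss in PyTorch; the names `render_gray`, `event_sum`, and `C` are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of an event-based photometric loss for NeRF training (assumed API).
import torch

def event_loss(render_gray, rays, t0, t1, event_sum, C=0.25, eps=1e-3):
    """
    render_gray : callable that volume-renders a grayscale intensity for `rays`
                  at a given timestamp (i.e. at the interpolated camera pose).
    rays        : (N, 6) tensor of ray origins and directions for sampled pixels.
    t0, t1      : start / end timestamps of the event accumulation window.
    event_sum   : (N,) signed sum of event polarities per pixel within [t0, t1].
    C           : contrast threshold of the event camera (sensor-dependent).
    """
    I0 = render_gray(rays, t0)                        # rendered intensity at pose(t0)
    I1 = render_gray(rays, t1)                        # rendered intensity at pose(t1)
    pred_logdiff = torch.log(I1 + eps) - torch.log(I0 + eps)
    target_logdiff = C * event_sum                    # event model: polarity sum * C
    return torch.mean((pred_logdiff - target_logdiff) ** 2)
```

When frames are also available, a combined objective would simply add a conventional photometric term on the (possibly blurred) images to the event term above; the sketch only illustrates the event side.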
