Paper Title

Reducing the Sim-to-Real Gap for Event Cameras

Paper Authors

Timo Stoffregen, Cedric Scheerlinck, Davide Scaramuzza, Tom Drummond, Nick Barnes, Lindsay Kleeman, Robert Mahony

Paper Abstract

Event cameras are paradigm-shifting novel sensors that report asynchronous, per-pixel brightness changes called 'events' with unparalleled low latency. This makes them ideal for high-speed, high-dynamic-range scenes where conventional cameras would fail. Recent work has demonstrated impressive results using Convolutional Neural Networks (CNNs) for video reconstruction and optic flow with events. We present strategies for improving training data for event-based CNNs that yield a 20-40% boost in the performance of existing state-of-the-art (SOTA) video reconstruction networks retrained with our method, and up to a 15% boost for optic flow networks. A challenge in evaluating event-based video reconstruction is the lack of quality ground truth images in existing datasets. To address this, we present a new High Quality Frames (HQF) dataset, containing events and ground truth frames from a DAVIS240C that are well exposed and minimally motion-blurred. We evaluate our method on HQF plus several existing major event camera datasets.
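For readers unfamiliar with the event camera model the abstract refers to, the sketch below illustrates the standard idealized event generation rule: a pixel fires an event whenever its log intensity has changed by at least a contrast threshold C since that pixel's last event, with polarity indicating the sign of the change. This is a generic minimal illustration of how events can be simulated from video frames, not the authors' training-data pipeline; the function name and the threshold value 0.2 are hypothetical.

```python
import numpy as np

def generate_events(frame, last_log, C=0.2, eps=1e-6):
    """Idealized event generation from a video frame.

    frame: float image with values in [0, 1].
    last_log: per-pixel log intensity at each pixel's most recent
        event (same shape as frame).
    Returns a list of (x, y, polarity) tuples and the updated last_log.
    """
    log_i = np.log(frame + eps)
    diff = log_i - last_log
    events = []
    ys, xs = np.nonzero(np.abs(diff) >= C)
    for y, x in zip(ys, xs):
        pol = 1 if diff[y, x] > 0 else -1
        n = int(abs(diff[y, x]) // C)       # one event per threshold crossing
        events.extend([(int(x), int(y), pol)] * n)
        last_log[y, x] += pol * n * C       # reference level moves in steps of C
    return events, last_log

# Usage: initialise last_log = np.log(first_frame + eps), then feed
# successive frames of a video to obtain a simulated event stream.
```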
