Paper Title
How Asynchronous Events Encode Video
Paper Authors
Paper Abstract
As event-based sensing gains in popularity, theoretical understanding is needed to harness this technology's potential. Instead of recording video by capturing frames, event-based cameras have sensors that emit events when their inputs change, thus encoding information in the timing of events. This creates new challenges in establishing reconstruction guarantees and algorithms, but also provides advantages over frame-based video. We use time encoding machines (TEMs) to model event-based sensors: TEMs also encode their inputs by emitting events characterized by their timing, and reconstruction from time encodings is well understood. We consider the case of time encoding bandlimited video and demonstrate a dependence between spatial sensor density and overall spatial and temporal resolution. Such a dependence does not occur in frame-based video, where temporal resolution depends solely on the frame rate of the video and spatial resolution depends solely on the pixel grid. However, this dependence arises naturally in event-based video and allows oversampling in space to provide better temporal resolution. As such, event-based vision encourages using more sensors that emit fewer events over time.
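To make the TEM idea concrete, here is a minimal sketch (not from the paper) of a standard integrate-and-fire time encoding machine applied to a one-dimensional bandlimited signal: the signal is never sampled into amplitude values; all information is carried by the emitted event times. The function name `if_tem_encode` and every parameter value below are illustrative assumptions, and a known sufficient condition for perfect reconstruction is that the longest inter-event gap, bounded by delta / (bias - max|u|), stays below the Nyquist period of the bandlimited input.

```python
import numpy as np

def if_tem_encode(u, dt, bias=1.5, delta=0.02):
    """Sketch of an integrate-and-fire time encoding machine (TEM).

    Integrates (u(t) + bias) over time; each time the running integral
    reaches the threshold `delta`, an event timestamp is emitted and the
    integrator is reset. The input is encoded purely in event timing.
    """
    events = []
    acc = 0.0
    for k, sample in enumerate(u):
        acc += (sample + bias) * dt
        if acc >= delta:
            events.append(k * dt)  # record the event time
            acc -= delta           # reset the integrator
    return np.array(events)

# Illustrative bandlimited input: a sum of low-frequency sinusoids with
# max|u| <= 0.7, so bias > max|u| and delta / (bias - max|u|) = 0.025 s,
# below the Nyquist period 1/22 s of the 11 Hz bandlimit (assumed values).
fs = 10_000                       # simulation rate, Hz
t = np.arange(0.0, 1.0, 1 / fs)
u = 0.4 * np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)

events = if_tem_encode(u, 1 / fs)
print(f"{len(events)} events; mean inter-event gap "
      f"{np.diff(events).mean():.4f} s")
```

In this sketch, raising the bias or lowering the threshold makes events fire more often, mirroring the abstract's trade-off: denser sampling (here in time, in the paper across a spatial grid of such sensors) buys finer resolution, while the paper's result runs the other way as well, letting more sensors each emit fewer events.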