Paper Title


HSTR-Net: High Spatio-Temporal Resolution Video Generation For Wide Area Surveillance

Authors

H. Umut Suluhan, Hasan F. Ates, Bahadir K. Gunturk

Abstract


Wide-area surveillance has many applications, and tracking objects under observation is an important task that often requires high spatio-temporal resolution (HSTR) video for better precision. This paper presents the use of multiple video feeds to generate HSTR video as an extension of reference-based super-resolution (RefSR). One feed captures video at high spatial resolution with a low frame rate (HSLF), while the other simultaneously captures low spatial resolution, high frame rate (LSHF) video of the same scene. The main goal is to create an HSTR video by fusing the HSLF and LSHF videos. In this paper, we propose an end-to-end trainable deep network that performs optical flow estimation and frame reconstruction by combining inputs from both video feeds. The proposed architecture provides significant improvement over existing video frame interpolation and RefSR techniques in terms of objective PSNR and SSIM metrics.
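To make the dual-feed setup concrete, the toy sketch below simulates the two streams and a naive fusion baseline: an HSLF frame (high resolution, available only at keyframes) is blended with a nearest-neighbor upsampling of the current LSHF frame. This is only an illustration of the problem setup under assumed toy data; the paper's actual method replaces this naive blend with a learned network that performs optical flow estimation and frame reconstruction, and the function names here are hypothetical.

```python
import numpy as np

def upsample_nearest(lr, scale):
    # Nearest-neighbor upsampling of an HxW low-resolution frame.
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

def fuse_naive(hr_ref, lr_cur, scale, alpha=0.5):
    # Toy fusion: blend the upsampled current LSHF frame with the
    # nearest HSLF reference frame. A stand-in for the flow-guided
    # warping and reconstruction performed by the proposed network.
    up = upsample_nearest(lr_cur, scale)
    return alpha * hr_ref + (1.0 - alpha) * up

# Simulated feeds for the same scene (hypothetical toy data):
scale = 4
hr_t0 = np.ones((8, 8))         # HSLF keyframe at time t0 (high-res)
lr_t1 = np.full((2, 2), 3.0)    # LSHF frame at intermediate time t1 (low-res)

# Reconstruct a high-res frame at t1 from both feeds.
hstr_t1 = fuse_naive(hr_t0, lr_t1, scale)
```

The output has the spatial resolution of the HSLF feed but corresponds to a timestamp covered only by the LSHF feed, which is exactly the HSTR frame the fusion aims to produce.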
