Paper Title
BERTHA: Video Captioning Evaluation Via Transfer-Learned Human Assessment
Paper Authors
Paper Abstract
Evaluating video captioning systems is a challenging task, as there are multiple factors to consider; for instance: the fluency of the caption, multiple actions happening in a single scene, and the human bias of what is considered important. Most metrics try to measure how similar the system-generated captions are to a single human-annotated caption or to a set of them. This paper presents a new method based on a deep learning model to evaluate these systems. The model is based on BERT, a language model that has been shown to work well in multiple NLP tasks. The aim is for the model to learn to perform an evaluation similar to that of a human. To do so, we use a dataset that contains human evaluations of system-generated captions. The dataset consists of the human judgments of the captions produced by the systems participating in various years of the TRECVid Video to Text task. These annotations will be made publicly available. BERTHA obtains favourable results, outperforming the commonly used metrics in some setups.