Paper Title

Reducing Confusion in Active Learning for Part-Of-Speech Tagging

Paper Authors

Chaudhary, Aditi, Anastasopoulos, Antonios, Sheikh, Zaid, Neubig, Graham

Paper Abstract

Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost. This is now an essential tool for building low-resource syntactic analyzers such as part-of-speech (POS) taggers. Existing AL heuristics are generally designed on the principle of selecting uncertain yet representative training instances, where annotating these instances may reduce a large number of errors. However, in an empirical study across six typologically diverse languages (German, Swedish, Galician, North Sami, Persian, and Ukrainian), we found the surprising result that even in an oracle scenario where we know the true uncertainty of predictions, these current heuristics are far from optimal. Based on this analysis, we pose the problem of AL as selecting instances which maximally reduce the confusion between particular pairs of output tags. Extensive experimentation on the aforementioned languages shows that our proposed AL strategy outperforms other AL strategies by a significant margin. We also present auxiliary results demonstrating the importance of proper calibration of models, which we ensure through cross-view training, and analysis demonstrating how our proposed strategy selects examples that more closely follow the oracle data distribution.
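The core idea described in the abstract, selecting unlabeled instances that most reduce confusion between specific pairs of output tags, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the `model_probs` function, the `confusion_score` heuristic, the tagset, and the min-probability scoring rule are all assumptions introduced here for clarity.

```python
# Illustrative sketch of confusion-driven instance selection for AL in POS tagging.
# Assumption: model_probs(sentence) returns one probability distribution over
# TAGSET per token. All names below are hypothetical, not from the paper.

TAGSET = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "ADP", "DET", "AUX"]

def confusion_score(token_probs, tag_pair):
    """Score how strongly a token is torn between the two tags in tag_pair."""
    i, j = (TAGSET.index(t) for t in tag_pair)
    # High only when both competing tags receive substantial probability mass.
    return min(token_probs[i], token_probs[j])

def select_for_annotation(unlabeled, model_probs, tag_pair, budget):
    """Pick the `budget` sentences whose tokens are most confused
    between the given pair of output tags."""
    scored = []
    for sent in unlabeled:
        probs = model_probs(sent)  # shape: (num_tokens, len(TAGSET))
        score = max(confusion_score(p, tag_pair) for p in probs)
        scored.append((score, sent))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [sent for _, sent in scored[:budget]]
```

In this toy version, the confusing tag pair would be chosen from the model's confusion statistics on held-out data, and the selected sentences would then be sent for annotation in the next AL round.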
