Paper Title

Nested-Wasserstein Self-Imitation Learning for Sequence Generation

Paper Authors

Ruiyi Zhang, Changyou Chen, Zhe Gan, Zheng Wen, Wenlin Wang, Lawrence Carin

Paper Abstract

Reinforcement learning (RL) has been widely studied for improving sequence-generation models. However, the conventional rewards used for RL training typically cannot capture sufficient semantic information and therefore introduce model bias. Further, sparse and delayed rewards make RL exploration inefficient. To alleviate these issues, we propose the concept of nested-Wasserstein distance for distributional semantic matching. To further exploit it, a novel nested-Wasserstein self-imitation learning framework is developed, encouraging the model to exploit historical high-reward sequences for enhanced exploration and better semantic matching. Our solution can be understood as approximately executing proximal policy optimization with Wasserstein trust-regions. Experiments on a variety of unconditional and conditional sequence-generation tasks demonstrate that the proposed approach consistently leads to improved performance.
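
The abstract compresses two mechanisms: measuring how well a generated sequence semantically matches a target by treating both as distributions over word embeddings and comparing them with an optimal-transport (Wasserstein) distance, and self-imitation of historical high-reward sequences kept in a buffer. The sketch below illustrates both ideas in plain NumPy; `sinkhorn_distance`, `SelfImitationBuffer`, and all parameter choices are illustrative names invented here, not the paper's code, and the entropic Sinkhorn solver merely stands in for whatever OT solver the authors use.

```python
# A minimal, self-contained sketch of the two ingredients the abstract
# describes: (1) a Wasserstein-style semantic distance between two sequences,
# each treated as an empirical distribution over word embeddings, computed
# with Sinkhorn iterations; and (2) a buffer of historical high-reward
# sequences for self-imitation. This illustrates the general idea only, not
# the paper's exact nested-Wasserstein formulation.
import heapq
import numpy as np

def sinkhorn_distance(X, Y, eps=0.1, n_iters=100):
    """Entropic-regularized OT cost between point clouds X (n, d) and Y (m, d)."""
    n, m = X.shape[0], Y.shape[0]
    # Pairwise squared-Euclidean costs between embeddings, normalized for stability.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    C = C / (C.max() + 1e-8)
    K = np.exp(-C / eps)                      # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # uniform weights on words
    u = np.ones(n)
    for _ in range(n_iters):                  # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]           # (approximate) optimal transport plan
    return float((P * C).sum())               # transport cost ~ Wasserstein distance

class SelfImitationBuffer:
    """Keeps the top-k highest-reward sequences seen so far, via a min-heap."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self._heap = []        # entries: (reward, insertion_id, sequence)
        self._next_id = 0      # tiebreaker so sequences are never compared

    def add(self, sequence, reward):
        item = (reward, self._next_id, sequence)
        self._next_id += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        else:
            heapq.heappushpop(self._heap, item)   # evict the lowest-reward entry

    def sample(self, rng):
        """Draw one stored (sequence, reward) pair uniformly at random."""
        reward, _, sequence = self._heap[rng.integers(len(self._heap))]
        return sequence, reward

# Toy usage: reward a generated sequence by its semantic closeness to a reference.
rng = np.random.default_rng(0)
reference = rng.normal(size=(12, 50))          # 12 "words", 50-dim embeddings
buffer = SelfImitationBuffer(capacity=8)
for _ in range(32):
    generated = rng.normal(size=(10, 50))
    reward = -sinkhorn_distance(generated, reference)   # closer => higher reward
    buffer.add(generated, reward)
best_seq, best_reward = buffer.sample(rng)
```

In the paper's framing, the "nested" construction roughly applies such a distance at two levels: individual sequences are matched as distributions over word embeddings, and sets of sequences are in turn matched as distributions over sequences, with the sequence-level distance serving as the ground cost.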
