Paper Title
CTC-synchronous Training for Monotonic Attention Model
Paper Authors
Abstract
Monotonic chunkwise attention (MoChA) has been studied for online streaming automatic speech recognition (ASR) based on a sequence-to-sequence framework. In contrast to connectionist temporal classification (CTC), backward probabilities cannot be leveraged in the alignment marginalization process during training due to the left-to-right dependency of the decoder. This results in alignment errors propagating to subsequent token generation. To address this problem, we propose CTC-synchronous training (CTC-ST), in which MoChA uses CTC alignments to learn optimal monotonic alignments. Reference CTC alignments are extracted from a CTC branch that shares the same encoder with the decoder. The entire model is jointly optimized so that the expected boundaries from MoChA are synchronized with the alignments. Experimental evaluations on the TEDLIUM release-2 and Librispeech corpora show that the proposed method significantly improves recognition, especially for long utterances. We also show that CTC-ST can bring out the full potential of SpecAugment for MoChA.
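The abstract describes synchronizing MoChA's expected token boundaries with boundaries extracted from a CTC alignment, combined with the model's other losses in joint optimization. A minimal sketch of that idea is below; it is an illustration, not the paper's exact formulation, and the distance measure (`mean absolute frame offset`) and the loss-weight names (`w_ctc`, `w_sync`) are assumptions for demonstration only.

```python
import numpy as np

def ctc_st_sync_loss(mocha_boundaries, ctc_boundaries):
    """Toy synchronization penalty (assumed form, not the paper's exact loss):
    mean absolute frame offset between MoChA's expected token boundaries
    and reference boundaries extracted from a CTC alignment.
    Both inputs are per-token boundary frame indices of equal length."""
    m = np.asarray(mocha_boundaries, dtype=float)
    c = np.asarray(ctc_boundaries, dtype=float)
    return float(np.mean(np.abs(m - c)))

def joint_loss(l_att, l_ctc, l_sync, w_ctc=0.3, w_sync=1.0):
    """Hypothetical weighted combination of the attention-decoder loss,
    the CTC branch loss, and the synchronization penalty."""
    return (1.0 - w_ctc) * l_att + w_ctc * l_ctc + w_sync * l_sync
```

For example, if MoChA places three token boundaries at frames `[12, 40, 71]` while the CTC alignment places them at `[12, 43, 71]`, the penalty is `1.0` (a single 3-frame offset averaged over three tokens), pushing the decoder's boundaries toward the CTC reference during training.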