Paper Title
Syntactic representation learning for neural network based TTS with syntactic parse tree traversal
Paper Authors
Paper Abstract
The syntactic structure of a sentence is correlated with the prosodic structure of its speech, which is crucial for improving the prosody and naturalness of a text-to-speech (TTS) system. Nowadays, TTS systems usually try to incorporate syntactic structure information through manually designed features based on expert knowledge. In this paper, we propose a syntactic representation learning method based on syntactic parse tree traversal to automatically utilize the syntactic structure information. Two constituent label sequences are linearized through left-first and right-first traversals of the constituent parse tree. Syntactic representations are then extracted at the word level from each constituent label sequence by a corresponding uni-directional gated recurrent unit (GRU) network. Meanwhile, a nuclear-norm maximization loss is introduced to enhance the discriminability and diversity of the constituent label embeddings. Upsampled syntactic representations and phoneme embeddings are concatenated to serve as the encoder input of Tacotron2. Experimental results demonstrate the effectiveness of our proposed approach, with the mean opinion score (MOS) increasing from 3.70 to 3.82 and ABX preference exceeding the baseline by 17%. In addition, for sentences with multiple syntactic parse trees, prosodic differences can be clearly perceived in the synthesized speech.
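The left-first and right-first traversals mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Tree` class and the example labels are hypothetical stand-ins for a parser's output, and each resulting label sequence would then be fed to its own uni-directional GRU.

```python
# Hypothetical minimal constituent tree; real systems would use a
# parser's output structure (e.g. an nltk.Tree).
class Tree:
    def __init__(self, label, children=None):
        self.label = label              # constituent label ("S", "NP", ...) or a word at a leaf
        self.children = children or []  # empty list => leaf node

def linearize(tree, right_first=False):
    """Pre-order traversal emitting node labels.

    Children are visited left-to-right for the left-first sequence
    and right-to-left for the right-first sequence, yielding the two
    constituent label sequences described in the abstract."""
    labels = [tree.label]
    children = reversed(tree.children) if right_first else tree.children
    for child in children:
        labels.extend(linearize(child, right_first))
    return labels

# Toy parse: (S (NP (PRP he)) (VP (VBZ runs)))
tree = Tree("S", [
    Tree("NP", [Tree("PRP", [Tree("he")])]),
    Tree("VP", [Tree("VBZ", [Tree("runs")])]),
])

left_seq = linearize(tree)                     # ['S', 'NP', 'PRP', 'he', 'VP', 'VBZ', 'runs']
right_seq = linearize(tree, right_first=True)  # ['S', 'VP', 'VBZ', 'runs', 'NP', 'PRP', 'he']
```

Note how the two traversals expose different structural context for the same words: in the left-first sequence a word's labels are preceded by its left context, while the right-first sequence leads with the right context, which is presumably why the paper pairs each sequence with a separate uni-directional GRU.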