Paper Title
SpeedySpeech: Efficient Neural Speech Synthesis
Paper Authors
Abstract
While recent neural sequence-to-sequence models have greatly improved the quality of speech synthesis, there has not been a system capable of fast training, fast inference and high-quality audio synthesis at the same time. We propose a student-teacher network capable of high-quality faster-than-real-time spectrogram synthesis, with low requirements on computational resources and fast training time. We show that self-attention layers are not necessary for the generation of high-quality audio. We utilize simple convolutional blocks with residual connections in both student and teacher networks and use only a single attention layer in the teacher model. Coupled with a MelGAN vocoder, our model's voice quality was rated significantly higher than Tacotron 2. Our model can be efficiently trained on a single GPU and can run in real time even on a CPU. We provide both our source code and audio samples in our GitHub repository.
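To make the "simple convolutional blocks with residual connections" concrete, here is a minimal NumPy sketch of such a block. This is an illustrative assumption, not the authors' implementation: the `conv1d` helper, the ReLU nonlinearity, the kernel width, and the channel/time layout are all choices made for this example, and real blocks would also include normalization and gating details from the paper.

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded 1D convolution: x is (channels, time), w is (out_ch, in_ch, k)."""
    out_ch, in_ch, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    T = x.shape[1]
    y = np.zeros((out_ch, T))
    for t in range(T):
        # each output frame is a weighted sum over a k-wide window of input frames
        y[:, t] = np.tensordot(w, xp[:, t:t + k], axes=([1, 2], [0, 1]))
    return y

def residual_block(x, w):
    """Convolution + ReLU, with the input added back (the residual connection)."""
    return x + np.maximum(conv1d(x, w), 0.0)

# toy example: 8 channels, 20 spectrogram frames, kernel width 3
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 20))
w = rng.standard_normal((8, 8, 3)) * 0.1
y = residual_block(x, w)
print(y.shape)  # the residual addition requires output shape == input shape
```

The residual connection lets gradients flow directly through the skip path, which is what allows stacking many such blocks in place of self-attention layers.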