Paper Title


BigVGAN: A Universal Neural Vocoder with Large-Scale Training

Paper Authors

Sang-gil Lee, Wei Ping, Boris Ginsburg, Bryan Catanzaro, Sungroh Yoon

Paper Abstract

Despite recent progress in generative adversarial network (GAN)-based vocoders, where the model generates raw waveform conditioned on acoustic features, it is challenging to synthesize high-fidelity audio for numerous speakers across various recording environments. In this work, we present BigVGAN, a universal vocoder that generalizes well for various out-of-distribution scenarios without fine-tuning. We introduce periodic activation function and anti-aliased representation into the GAN generator, which brings the desired inductive bias for audio synthesis and significantly improves audio quality. In addition, we train our GAN vocoder at the largest scale up to 112M parameters, which is unprecedented in the literature. We identify and address the failure modes in large-scale GAN training for audio, while maintaining high-fidelity output without over-regularization. Our BigVGAN, trained only on clean speech (LibriTTS), achieves the state-of-the-art performance for various zero-shot (out-of-distribution) conditions, including unseen speakers, languages, recording environments, singing voices, music, and instrumental audio. We release our code and model at: https://github.com/NVIDIA/BigVGAN
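The "periodic activation function" the abstract refers to is the Snake activation, f(x) = x + (1/α)·sin²(αx), which biases the generator toward periodic signals such as audio. Below is a minimal NumPy sketch of that function for illustration only; the actual BigVGAN generator implements it in PyTorch with a learnable per-channel α and combines it with anti-aliased (low-pass filtered) up/downsampling, none of which is reproduced here.

```python
import numpy as np

def snake(x, alpha=1.0):
    """Snake periodic activation: x + (1/alpha) * sin^2(alpha * x).

    For alpha -> 0 this approaches the identity; larger alpha adds a
    stronger periodic component, giving an inductive bias for audio.
    """
    return x + (1.0 / alpha) * np.sin(alpha * x) ** 2

# Example: evaluate on a few points
x = np.linspace(-np.pi, np.pi, 5)
y = snake(x, alpha=1.0)
```

Note that sin²(αx) is zero at integer multiples of π/α, so the activation passes through those points unchanged (e.g. snake(0) = 0).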
