Paper Title
Automatic Lyrics Transcription using Dilated Convolutional Neural Networks with Self-Attention
Paper Authors
Paper Abstract
Speech recognition is a well-developed research field, and current state-of-the-art systems are used in many applications across the software industry; yet, to date, no comparably robust system exists for recognizing words and sentences from singing voice. This paper proposes a complete pipeline for this task, commonly referred to as automatic lyrics transcription (ALT). To build the acoustic model, we train convolutional time-delay neural networks with self-attention on monophonic karaoke recordings using a sequence classification objective. The dataset used in this study, DAMP - Sing! 300x30x2 [1], is filtered to retain only songs with English lyrics. Different language models are tested, including MaxEnt- and recurrent-neural-network-based methods trained on the lyrics of English pop songs. An in-depth analysis of the self-attention mechanism is carried out while tuning its context width and the number of attention heads. With the best settings, our system achieves a notable improvement over the state of the art in ALT and provides a new baseline for the task.
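The abstract describes an acoustic model built from convolutional time-delay layers followed by self-attention whose context width and number of heads are tuned. The sketch below is only an illustration of that kind of block, not the authors' implementation; the class name `ConvSelfAttentionBlock`, all layer sizes, and the default values for `context_width` and `num_heads` are assumptions made for demonstration.

```python
# Minimal sketch (assumed, not the paper's code) of a dilated 1-D convolution
# (TDNN-style) followed by multi-head self-attention restricted to a local
# context window, as suggested by the abstract's description.
import torch
import torch.nn as nn


class ConvSelfAttentionBlock(nn.Module):
    def __init__(self, dim=256, kernel_size=3, dilation=2,
                 num_heads=4, context_width=15):
        super().__init__()
        # Dilated 1-D convolution over the time axis; padding keeps the
        # output length equal to the input length.
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              dilation=dilation,
                              padding=dilation * (kernel_size - 1) // 2)
        # Multi-head self-attention; context_width bounds how far each
        # frame may attend via a band-diagonal boolean mask.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.context_width = context_width
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                      # x: (batch, time, dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        t = h.size(1)
        # True entries mark frame pairs farther apart than context_width,
        # which are excluded from attention.
        idx = torch.arange(t, device=h.device)
        mask = (idx[None, :] - idx[:, None]).abs() > self.context_width
        a, _ = self.attn(h, h, h, attn_mask=mask)
        return self.norm(h + a)                # residual connection


# Example usage: a batch of 8 utterances, 200 frames, 256-dim features.
feats = torch.randn(8, 200, 256)
out = ConvSelfAttentionBlock()(feats)
print(out.shape)  # torch.Size([8, 200, 256])
```

Tuning the context width and the number of heads, as the abstract reports, would correspond here to varying `context_width` and `num_heads`.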