Paper Title

TRBLLmaker -- Transformer Reads Between Lyrics Lines maker

Paper Authors

Mor Ventura, Michael Toker

Paper Abstract

Even for human listeners, comprehending the meaning of a song can be challenging. As part of this project, we explore the process of generating the meaning of songs. Despite the widespread use of text-to-text models, few attempts have been made to achieve a similar objective. Songs have primarily been studied in the context of sentiment analysis, which involves identifying opinions and emotions in texts, evaluating them as positive or negative, and utilizing these evaluations to make music recommendations. In this paper, we present a generative model that offers implicit meanings for several lines of a song. Our model uses a decoder-only Transformer architecture, GPT-2, where the input is the lyrics of a song. Furthermore, we compared the performance of this architecture with that of the encoder-decoder Transformer architecture of the T5 model. We also examined the effect of different prompt types, with the option of appending additional information such as the name of the artist and the title of the song. Moreover, we tested different decoding methods with different training parameters and evaluated our results using ROUGE. To build our dataset, we utilized the Genius API, which allowed us to acquire the lyrics of songs and their explanations, as well as rich metadata.
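
The abstract outlines a pipeline of prompting a GPT-2 decoder with a few lyric lines (optionally appending the artist name and song title), comparing decoding methods, and scoring the generated meaning against a reference explanation with ROUGE. The sketch below illustrates that flow using the HuggingFace transformers and rouge-score packages. The build_prompt helper, the prompt template, and all example strings are illustrative assumptions rather than the paper's exact setup, and the GPT-2 checkpoint here is the off-the-shelf model, not the fine-tuned one.

```python
# Minimal sketch: prompt GPT-2 with lyrics (plus optional artist/title), try two
# decoding strategies, and score the sampled output against a reference
# explanation with ROUGE. Prompt format and strings are illustrative only.
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from rouge_score import rouge_scorer


def build_prompt(lyrics: str, artist: str = None, title: str = None) -> str:
    """Hypothetical prompt template: lyric lines, optionally prefixed with metadata."""
    parts = []
    if artist:
        parts.append(f"artist: {artist}")
    if title:
        parts.append(f"title: {title}")
    parts.append(f"lyrics: {lyrics}")
    parts.append("meaning:")  # the model continues from this cue
    return "\n".join(parts)


tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = build_prompt(
    lyrics="Hello darkness, my old friend / I've come to talk with you again",
    artist="Simon & Garfunkel",
    title="The Sound of Silence",
)
inputs = tokenizer(prompt, return_tensors="pt")

# Two decoding strategies one might compare: greedy search vs. nucleus sampling.
greedy_ids = model.generate(
    **inputs, max_new_tokens=64, do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
sampled_ids = model.generate(
    **inputs, max_new_tokens=64, do_sample=True, top_p=0.9, temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
greedy_text = tokenizer.decode(greedy_ids[0], skip_special_tokens=True)[len(prompt):]
sampled_text = tokenizer.decode(sampled_ids[0], skip_special_tokens=True)[len(prompt):]

# ROUGE comparison against a reference explanation (e.g., a Genius annotation);
# the reference string here is a made-up placeholder.
reference = "The narrator returns to a familiar loneliness he treats like a companion."
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print("greedy:", scorer.score(reference, greedy_text))
print("sampled:", scorer.score(reference, sampled_text))
```

In the paper's actual setting, GPT-2 would first be fine-tuned on lyric/explanation pairs collected through the Genius API, and the same prompt variants would be fed to a T5 encoder-decoder for comparison.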
