Paper Title
BERT for Sentiment Analysis: Pre-trained and Fine-Tuned Alternatives
Paper Authors
Paper Abstract
BERT has revolutionized the NLP field by enabling transfer learning with large language models that can capture complex textual patterns, reaching state-of-the-art results for a significant number of NLP applications. For text classification tasks, BERT has already been extensively explored. However, aspects such as how to best handle the different embeddings provided by the BERT output layer, and the use of language-specific models instead of multilingual ones, are not well studied in the literature, especially for Brazilian Portuguese. The purpose of this article is to conduct an extensive experimental study of different strategies for aggregating the features produced in the BERT output layer, with a focus on the sentiment analysis task. The experiments include BERT models trained on Brazilian Portuguese corpora as well as the multilingual version, covering multiple aggregation strategies and open-source datasets with predefined training, validation, and test partitions to facilitate reproducibility of the results. BERT achieved the highest ROC-AUC values in the majority of cases when compared to TF-IDF. Nonetheless, TF-IDF represents a good trade-off between predictive performance and computational cost.
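Since the paper's central question is how to aggregate the token-level embeddings from BERT's output layer into a single text representation, a minimal sketch of three common aggregation strategies follows. This is an illustration under stated assumptions, not the authors' exact pipeline: the model name "neuralmind/bert-base-portuguese-cased" (BERTimbau) and the use of the Hugging Face transformers API are assumptions, since the abstract only states that Brazilian Portuguese and multilingual BERT models were compared.

```python
# Sketch of three common ways to pool BERT's output-layer token embeddings
# into one sentence vector for a downstream sentiment classifier.
# Assumption: BERTimbau as the Brazilian Portuguese model; the paper does
# not name a specific checkpoint in the abstract.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "neuralmind/bert-base-portuguese-cased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

texts = ["O filme foi ótimo!", "Não gostei do produto."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)

hidden = out.last_hidden_state                      # (batch, seq_len, hidden)
mask = batch["attention_mask"].unsqueeze(-1).float()  # 1 for real tokens

# Strategy 1: take the [CLS] token embedding (first position).
cls_vec = hidden[:, 0, :]

# Strategy 2: mean pooling over non-padding tokens only.
mean_vec = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Strategy 3: max pooling, with padding positions masked out first.
max_vec = hidden.masked_fill(mask == 0, float("-inf")).max(dim=1).values

print(cls_vec.shape, mean_vec.shape, max_vec.shape)  # each (2, hidden)
```

Whichever pooled vector is chosen can then be fed to a classifier (for example, a logistic regression head) and its ROC-AUC compared against a TF-IDF baseline, mirroring the comparison reported in the abstract.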