Title
Noise Audits Improve Moral Foundation Classification
Authors
Abstract
Morality plays an important role in culture, identity, and emotion. Recent advances in natural language processing have shown that it is possible to classify moral values expressed in text at scale. Morality classification relies on human annotators to label the moral expressions in text, which provides training data to achieve state-of-the-art performance. However, these annotations are inherently subjective and some of the instances are hard to classify, resulting in noisy annotations due to error or lack of agreement. The presence of noise in training data harms the classifier's ability to accurately recognize moral foundations from text. We propose two metrics to audit the noise of annotations. The first metric is entropy of instance labels, which is a proxy measure of annotator disagreement about how the instance should be labeled. The second metric is the silhouette coefficient of a label assigned by an annotator to an instance. This metric leverages the idea that instances with the same label should have similar latent representations, and deviations from collective judgments are indicative of errors. Our experiments on three widely used moral foundations datasets show that removing noisy annotations based on the proposed metrics improves classification performance.
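The two audit metrics described above can be sketched concretely. This is a minimal illustration, not the paper's implementation: the annotation lists, embeddings, and label assignments below are hypothetical, and the silhouette computation assumes `scikit-learn` is available.

```python
import numpy as np
from collections import Counter
from sklearn.metrics import silhouette_samples

def label_entropy(labels):
    """Entropy of the label distribution an instance received from its
    annotators. Higher entropy indicates more annotator disagreement,
    i.e., a noisier instance."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Hypothetical annotations: three annotators per instance.
annotations = [
    ["care", "care", "care"],         # full agreement  -> entropy 0
    ["care", "fairness", "loyalty"],  # full disagreement -> entropy log2(3)
]
entropies = [label_entropy(a) for a in annotations]

# Second metric: silhouette coefficient of each (instance, label) pair.
# Instances assigned the same label should have similar latent
# representations; a low or negative silhouette score flags annotations
# that deviate from the collective judgment and are likely errors.
# Hypothetical 2-D latent embeddings and assigned labels:
embeddings = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
assigned = np.array([0, 0, 1, 1])  # two well-separated label clusters
scores = silhouette_samples(embeddings, assigned)
```

Under the proposed auditing procedure, annotations with high label entropy or low silhouette scores would be candidates for removal before training the classifier.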