Paper Title
Asymmetric Proxy Loss for Multi-View Acoustic Word Embeddings
Paper Authors
Abstract
Acoustic word embeddings (AWEs) are discriminative representations of speech segments, and the learned embedding space reflects the phonetic similarity between words. With multi-view learning, where text labels are considered as supplementary input, AWEs are jointly trained with acoustically grounded word embeddings (AGWEs). In this paper, we expand the multi-view approach into a proxy-based framework for deep metric learning by equating AGWEs with proxies. A simple modification in computing the similarity matrix allows the general pair weighting to formulate the data-to-proxy relationship. Under the systematized framework, we propose an asymmetric-proxy loss that combines different parts of loss functions asymmetrically while keeping their merits. It follows the assumptions that the optimal function for anchor-positive pairs may differ from that for anchor-negative pairs, and that a proxy may have a different impact when it substitutes for different positions in the triplet. We present comparative experiments with various proxy-based losses including our asymmetric-proxy loss, and evaluate AWEs and AGWEs on word discrimination tasks on the WSJ corpus. The results demonstrate the effectiveness of the proposed method.
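The core idea of the abstract — computing a data-to-proxy similarity matrix and then weighting anchor-positive and anchor-negative pairs with different functions — can be sketched in NumPy. This is a minimal illustrative sketch, not the paper's actual formulation: the Proxy-Anchor-style log-sum-exp pull term and the differently scaled push term are stand-ins chosen to show what "asymmetric" means here, and the function name, hyperparameters `alpha` and `margin`, and the specific terms are all assumptions.

```python
import numpy as np

def asymmetric_proxy_loss(embeddings, proxies, labels, alpha=32.0, margin=0.1):
    """Illustrative asymmetric proxy-style loss (not the paper's exact form):
    one function weights anchor-positive (data-to-proxy) pairs, a different
    one weights anchor-negative pairs."""
    # L2-normalize so dot products are cosine similarities
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    sim = e @ p.T  # (batch, num_proxies) data-to-proxy similarity matrix

    # Each row's positive proxy is the one matching the word label
    pos_mask = np.zeros_like(sim, dtype=bool)
    pos_mask[np.arange(len(labels)), labels] = True

    # Pull term over anchor-positive pairs (log-sum-exp pulls data to proxies)
    pos_term = np.log1p(np.sum(np.exp(-alpha * (sim[pos_mask] - margin))))
    # Push term over anchor-negative pairs, with a different (opposite-signed)
    # weighting function, illustrating the asymmetric combination
    neg_term = np.log1p(np.sum(np.exp(alpha * (sim[~pos_mask] + margin))))
    return pos_term + neg_term
```

In a multi-view setting, `proxies` would be the AGWEs produced from the text view, so the same matrix `sim` also defines the joint training signal between the two views.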