Paper Title
Intercategorical Label Interpolation for Emotional Face Generation with Conditional Generative Adversarial Networks
Paper Authors
Abstract
Generative adversarial networks offer the possibility of generating deceptively real images that are almost indistinguishable from actual photographs. Such systems, however, rely on the availability of large datasets to realistically replicate the corresponding domain. This becomes a particular problem when not only random new images are to be generated, but specific (continuous) features are to be co-modeled. An important use case in \emph{Human-Computer Interaction} (HCI) research is the generation of emotional images of human faces, which can serve various applications, such as the automatic generation of avatars. The problem here lies in the availability of training data: most datasets suitable for this task rely on categorical emotion models and therefore feature only discrete annotation labels. This greatly hinders the learning and modeling of smooth transitions between displayed affective states. To overcome this challenge, we explore the potential of label interpolation to endow networks trained on categorical datasets with the ability to generate images conditioned on continuous features.
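The core idea of label interpolation can be sketched as linearly blending the one-hot condition vectors of two discrete emotion classes before passing the result to the generator, so that intermediate affective states become expressible at inference time. The following minimal sketch is illustrative only: the emotion class indices, the blending function, and its name are assumptions, not details taken from the paper.

```python
import numpy as np

def interpolate_labels(label_a: np.ndarray, label_b: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly blend two one-hot condition vectors.

    alpha = 0.0 returns label_a, alpha = 1.0 returns label_b;
    intermediate values yield a soft label between the two classes.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1.0 - alpha) * label_a + alpha * label_b

# Hypothetical categorical emotion classes (for illustration only):
# index 0 = neutral, 1 = happy, 2 = sad
num_classes = 3
neutral = np.eye(num_classes)[0]  # [1., 0., 0.]
happy = np.eye(num_classes)[1]    # [0., 1., 0.]

# A label halfway between neutral and happy; a conditional generator
# would receive this soft label alongside its noise vector.
blended = interpolate_labels(neutral, happy, 0.5)
print(blended)  # [0.5 0.5 0. ]
```

Because the blend is convex, the interpolated label still sums to one, so it remains a valid soft condition vector for a generator trained with one-hot inputs.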