Paper Title
Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization
Paper Authors
Paper Abstract
A number of variational autoencoders (VAEs) have recently emerged with the aim of modeling multimodal data, e.g., to jointly model images and their corresponding captions. Still, multimodal VAEs tend to focus solely on a subset of the modalities, e.g., by fitting the image while neglecting the caption. We refer to this limitation as modality collapse. In this work, we argue that this effect is a consequence of conflicting gradients during multimodal VAE training. We show how to detect the sub-graphs in the computational graphs where gradients conflict (impartiality blocks), as well as how to leverage existing gradient-conflict solutions from multitask learning to mitigate modality collapse. That is, to ensure impartial optimization across modalities. We apply our training framework to several multimodal VAE models, losses and datasets from the literature, and empirically show that our framework significantly improves the reconstruction performance, conditional generation, and coherence of the latent space across modalities.
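The abstract names two technical ingredients: detecting where the gradients of the per-modality losses conflict, and resolving those conflicts with existing multitask-learning techniques. The paper's own implementation is not shown here; the following is a minimal PyTorch sketch of one such technique, a PCGrad-style projection, in which per-modality gradients are flagged as conflicting when their dot product is negative and the conflicting component is projected out before the parameter update. The names `model`, `loss_image`, and `loss_caption` are hypothetical placeholders.

```python
import torch

def pcgrad_step(model, losses):
    """PCGrad-style update: compute one gradient per modality loss,
    detect pairwise conflicts (negative dot product), and project each
    gradient onto the normal plane of the gradient it conflicts with
    before summing and writing the result back to the parameters."""
    params = [p for p in model.parameters() if p.requires_grad]

    # One flattened gradient vector per modality loss.
    grads = []
    for loss in losses:
        g = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
        flat = torch.cat([
            (gi if gi is not None else torch.zeros_like(p)).reshape(-1)
            for gi, p in zip(g, params)
        ])
        grads.append(flat)

    projected = [g.clone() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:  # conflict: the two modality gradients point apart
                # Remove the component of g_i that opposes g_j.
                g_i -= dot / g_j.norm().pow(2) * g_j

    # Combine the de-conflicted gradients and store them in .grad
    # so a standard optimizer step can consume them.
    total = torch.stack(projected).sum(dim=0)
    offset = 0
    for p in params:
        n = p.numel()
        p.grad = total[offset:offset + n].view_as(p)
        offset += n
```

A typical training step under these assumptions would call `optimizer.zero_grad()`, compute `loss_image` and `loss_caption` from one forward pass, run `pcgrad_step(model, [loss_image, loss_caption])`, and then call `optimizer.step()`. This sketch applies the projection over all shared parameters; the paper itself scopes the conflict resolution to the sub-graphs it identifies as impartiality blocks.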