Paper Title

Auto-Encoder based Co-Training Multi-View Representation Learning

Authors

Run-kun Lu, Jian-wei Liu, Yuan-fang Wang, Hao-jie Xie, Xin Zuo

Abstract

Multi-view learning is a learning problem that utilizes the various representations of an object to mine valuable knowledge and improve the performance of learning algorithms; one of its significant directions is subspace learning. As is well known, the auto-encoder is a deep learning method that can learn a latent feature representation of raw data by reconstructing the input. Based on this, we propose a novel algorithm called Auto-encoder based Co-training Multi-View Learning (ACMVL), which exploits both the complementarity and the consistency of multiple views to find a joint latent feature representation. The algorithm has two stages: the first trains an auto-encoder for each view, and the second trains a supervised network. Interestingly, the two stages partially share weights and assist each other through a co-training process. According to the experimental results, the learned latent feature representation performs well, and the auto-encoder of each view has stronger reconstruction ability than a traditional auto-encoder.
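The two-stage procedure described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the class `ViewAutoEncoder`, the function `train_acmvl`, the layer sizes, and the learning rates are all hypothetical. It only shows the structure: stage 1 trains a per-view auto-encoder on reconstruction loss, and stage 2 trains a supervised head on the concatenated latent codes, with the encoder weights shared between the two stages.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ViewAutoEncoder:
    """One-hidden-layer auto-encoder for a single view (hypothetical sizes)."""
    def __init__(self, d_in, d_latent, lr=0.1):
        self.W_enc = rng.normal(0, 0.1, (d_in, d_latent))
        self.W_dec = rng.normal(0, 0.1, (d_latent, d_in))
        self.lr = lr

    def encode(self, X):
        # Latent feature of this view; also feeds the supervised stage,
        # so W_enc is the "partly shared" weight in this sketch.
        return sigmoid(X @ self.W_enc)

    def train_step(self, X):
        # Stage 1: minimize squared reconstruction error for this view.
        H = self.encode(X)
        X_hat = H @ self.W_dec
        err = X_hat - X
        g_dec = H.T @ err                          # grad w.r.t. decoder
        g_h = err @ self.W_dec.T * H * (1 - H)     # backprop through sigmoid
        g_enc = X.T @ g_h                          # grad w.r.t. encoder
        self.W_dec -= self.lr * g_dec / len(X)
        self.W_enc -= self.lr * g_enc / len(X)
        return float(np.mean(err ** 2))

def train_acmvl(views, y, d_latent=8, rounds=20):
    """Alternate stage 1 (per-view AE) and stage 2 (supervised head)."""
    aes = [ViewAutoEncoder(v.shape[1], d_latent) for v in views]
    n_cls = int(y.max()) + 1
    W_sup = rng.normal(0, 0.1, (d_latent * len(views), n_cls))
    for _ in range(rounds):
        for ae, v in zip(aes, views):
            ae.train_step(v)                       # stage 1: reconstruction
        # Joint latent representation: concatenation of all view codes.
        H = np.hstack([ae.encode(v) for ae, v in zip(aes, views)])
        logits = H @ W_sup
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        p[np.arange(len(y)), y] -= 1               # softmax cross-entropy grad
        W_sup -= 0.1 * H.T @ p / len(y)            # stage 2: supervised head
        # In the paper the supervised signal also flows back into the shared
        # encoder weights (the co-training step); omitted here for brevity.
    return aes, W_sup
```

The alternation between the reconstruction step and the supervised step is what the abstract calls co-training; a faithful implementation would also propagate the supervised gradient into each view's encoder.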
