Paper Title


Learnable Graph Convolutional Network and Feature Fusion for Multi-view Learning

Authors

Zhaoliang Chen, Lele Fu, Jie Yao, Wenzhong Guo, Claudia Plant, Shiping Wang

Abstract


In practical applications, multi-view data that depict objects from assorted perspectives can improve the accuracy of learning algorithms. However, little work has addressed learning discriminative node relationships and graph information simultaneously via graph convolutional networks, which have drawn the attention of many researchers in recent years. Most existing methods consider only a weighted sum of adjacency matrices, and a joint neural network for both feature and graph fusion remains under-explored. To cope with these issues, this paper proposes a joint deep learning framework called Learnable Graph Convolutional Network and Feature Fusion (LGCN-FF), consisting of two stages: a feature fusion network and a learnable graph convolutional network. The former aims to learn an underlying feature representation from heterogeneous views, while the latter explores a more discriminative graph fusion via learnable weights and a parametric activation function dubbed the Differentiable Shrinkage Activation (DSA) function. The proposed LGCN-FF is shown to be superior to various state-of-the-art methods in multi-view semi-supervised classification.
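The two ingredients the abstract highlights, a learnable weighted fusion of per-view adjacency matrices and a differentiable shrinkage activation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the softmax reweighting and the soft-threshold form of the DSA function are assumptions, and the `fuse_graphs`/`dsa` names are hypothetical.

```python
import numpy as np

def softmax(w):
    """Numerically stable softmax over a 1-D weight vector."""
    e = np.exp(w - np.max(w))
    return e / e.sum()

def fuse_graphs(adjs, logits):
    """Fuse per-view adjacency matrices with learnable weights.

    `logits` are unconstrained parameters (trainable in practice);
    the softmax keeps the fused graph a convex combination of views.
    """
    alpha = softmax(logits)
    return sum(a * A for a, A in zip(alpha, adjs))

def dsa(x, theta):
    """Hypothetical soft-threshold form of a differentiable
    shrinkage activation: entries with magnitude below `theta`
    are shrunk to zero, suppressing weak/noisy edges."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

# Two toy views of a 2-node graph; equal logits give an average.
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[0.0, 0.5], [0.5, 0.0]])
fused = fuse_graphs([A1, A2], np.array([0.0, 0.0]))
sparse = dsa(fused, theta=0.5)
```

With equal logits the fusion reduces to a plain average, and the shrinkage step then zeroes out edges weaker than the threshold; training the logits and `theta` jointly with the network is what would make the fusion "learnable" in the sense the abstract describes.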
