Paper Title

Multi-Level Representation Learning for Deep Subspace Clustering

Authors

Mohsen Kheirandishfard, Fariba Zohrizadeh, Farhad Kamangar

Abstract

This paper proposes a novel deep subspace clustering approach that uses convolutional autoencoders to transform input images into new representations lying on a union of linear subspaces. The first contribution of our work is to insert multiple fully-connected linear layers between the encoder layers and their corresponding decoder layers to promote learning more favorable representations for subspace clustering. These connection layers facilitate the feature learning procedure by combining low-level and high-level information to generate multiple sets of self-expressive and informative representations at different levels of the encoder. Moreover, we introduce a novel loss minimization problem which leverages an initial clustering of the samples to effectively fuse the multi-level representations and recover the underlying subspaces more accurately. The loss function is then minimized through an iterative scheme which alternately updates the network parameters and produces new clusterings of the samples. Experiments on four real-world datasets demonstrate that our approach exhibits superior performance compared to state-of-the-art methods on most subspace clustering problems.
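
The abstract describes a convolutional autoencoder whose encoder levels each feed a fully-connected, bias-free self-expressive layer, plus a loss that combines reconstruction with per-level self-expression. The sketch below is a minimal illustration of that idea, assuming PyTorch, three encoder levels, arbitrary channel widths, and a simplified wiring of the connection layers and loss terms; the names `MultiLevelSelfExpressiveAE`, `multilevel_loss`, `lam1`, and `lam2` are hypothetical, and the paper's actual fusion of levels via an initial clustering and its iterative clustering update are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLevelSelfExpressiveAE(nn.Module):
    """Convolutional autoencoder with a bias-free fully-connected
    (self-expressive) layer between each encoder level and its mirror
    decoder level.  Hypothetical sketch, not the paper's exact network."""

    def __init__(self, n_samples, channels=(1, 16, 32, 64)):
        super().__init__()
        n_levels = len(channels) - 1
        # Encoder: each level halves the spatial resolution.
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels[i], channels[i + 1], 3, stride=2, padding=1),
                nn.ReLU())
            for i in range(n_levels)])
        # Decoder: mirrors the encoder, doubling the spatial resolution.
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(channels[i + 1], channels[i], 3, stride=2,
                                   padding=1, output_padding=1),
                nn.ReLU())
            for i in reversed(range(n_levels))])
        # One self-expressive coefficient matrix C_l per encoder level:
        # each sample is expressed as a linear combination of all samples.
        self.self_expr = nn.ModuleList([
            nn.Linear(n_samples, n_samples, bias=False)
            for _ in range(n_levels)])

    def forward(self, x):
        # x holds the whole dataset: (n_samples, C, H, W), with H and W
        # divisible by 2 ** n_levels so encoder/decoder shapes line up.
        feats, expressed = [], []
        h = x
        for enc in self.encoders:
            h = enc(h)
            feats.append(h)
        # Self-expression on the flattened features of each level: Z_hat = C_l Z.
        for f, layer in zip(feats, self.self_expr):
            z = f.flatten(start_dim=1)              # (n_samples, d_l)
            expressed.append((z, layer.weight @ z))
        # Decode from the deepest expressed features, adding each shallower
        # level's expressed features at its mirror decoder level (a simplifying
        # assumption about how the connection layers are wired).
        h = expressed[-1][1].view_as(feats[-1])
        for i, dec in enumerate(self.decoders):
            h = dec(h)
            mirror = len(feats) - 2 - i
            if mirror >= 0:
                h = h + expressed[mirror][1].view_as(feats[mirror])
        return h, expressed


def multilevel_loss(x, recon, expressed, self_expr, lam1=1.0, lam2=1.0):
    """Illustrative objective: reconstruction + per-level self-expression
    error + a Frobenius penalty on the coefficient matrices.  The paper's
    full loss additionally uses an initial clustering to fuse the levels."""
    loss = F.mse_loss(recon, x, reduction="sum")
    for (z, z_hat), layer in zip(expressed, self_expr):
        loss = loss + lam1 * (z_hat - z).pow(2).sum()
        loss = loss + lam2 * layer.weight.pow(2).sum()
    return loss
```

As in other self-expressive architectures, the whole dataset is passed as a single batch so that each coefficient matrix C_l can express every sample as a combination of all the others; the learned matrices would then typically be used to build an affinity for spectral clustering.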
