Paper Title

K-Deep Simplex: Deep Manifold Learning via Local Dictionaries

Paper Authors

Pranay Tankala, Abiy Tasissa, James M. Murphy, Demba Ba

Paper Abstract

We propose K-Deep Simplex (KDS), which, given a set of data points, learns a dictionary comprising synthetic landmarks, along with representation coefficients supported on a simplex. KDS employs a local weighted $\ell_1$ penalty that encourages each data point to represent itself as a convex combination of nearby landmarks. We solve the proposed optimization program using alternating minimization and design an efficient, interpretable autoencoder using algorithm unrolling. We theoretically analyze the proposed program by relating the weighted $\ell_1$ penalty in KDS to a weighted $\ell_0$ program. Assuming that the data are generated from a Delaunay triangulation, we prove the equivalence of the weighted $\ell_1$ and weighted $\ell_0$ programs. We further show the stability of the representation coefficients under mild geometric assumptions. If the representation coefficients are fixed, we prove that the sub-problem of minimizing over the dictionary yields a unique solution. Further, we show that low-dimensional representations can be efficiently obtained from the covariance of the coefficient matrix. Experiments show that the algorithm is highly efficient and performs competitively on synthetic and real data sets.
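The abstract summarizes the optimization program without stating it explicitly. As a rough illustration, the sketch below assumes a formulation of the kind described: $\min_{A,X} \frac{1}{2}\|Y - AX\|_F^2 + \lambda \sum_{i,j} X_{ij}\|a_i - y_j\|_2^2$, with each column of $X$ constrained to the probability simplex; on the simplex the weighted $\ell_1$ penalty reduces to this linear locality term. The function names and the plain projected-gradient updates are illustrative stand-ins for the paper's alternating minimization and unrolled autoencoder, not the authors' implementation.

```python
import numpy as np

def project_simplex(V):
    """Euclidean projection of each column of V onto the probability simplex
    (sort-based algorithm, applied column-wise)."""
    m, n = V.shape
    U = -np.sort(-V, axis=0)                      # columns sorted descending
    css = np.cumsum(U, axis=0) - 1.0
    ks = np.arange(1, m + 1)[:, None]
    rho = np.count_nonzero(U - css / ks > 0, axis=0)
    theta = css[rho - 1, np.arange(n)] / rho
    return np.maximum(V - theta, 0.0)

def kds_alternating(Y, m, lam=0.1, n_iters=200, step=1e-2, seed=0):
    """Toy alternating minimization for a KDS-style objective (see lead-in).

    Y: (d, n) data matrix; m: number of landmarks. Returns landmarks A (d, m)
    and simplex-constrained coefficients X (m, n). Hypothetical sketch only.
    """
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    A = Y[:, rng.choice(n, size=m, replace=False)].copy()  # landmarks from data
    X = np.full((m, n), 1.0 / m)                            # uniform simplex start
    for _ in range(n_iters):
        # locality weights w_ij = ||a_i - y_j||^2
        W = ((A[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
        # X-step: projected gradient on 0.5*||Y - AX||_F^2 + lam * <W, X>
        X = project_simplex(X - step * (A.T @ (A @ X - Y) + lam * W))
        # A-step: gradient step; the locality term contributes
        # 2*lam*(a_i * sum_j X_ij - sum_j X_ij * y_j) per landmark a_i
        grad_A = (A @ X - Y) @ X.T + 2 * lam * (A * X.sum(axis=1) - Y @ X.T)
        A -= step * grad_A
    return A, X

# Usage on toy data: noisy points near a circle, 8 landmarks.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = rng.uniform(0, 2 * np.pi, 500)
    Y = np.vstack([np.cos(t), np.sin(t)]) + 0.05 * rng.standard_normal((2, 500))
    A, X = kds_alternating(Y, m=8)
    print(A.shape, X.shape, X.sum(axis=0)[:5])  # each column of X sums to 1
```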
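The claim that low-dimensional representations follow from the covariance of the coefficient matrix admits a simple reading: since $X$ is only $m \times n$ with $m \ll n$, a spectral embedding can be computed from the small $m \times m$ matrix $XX^\top$. The following is a minimal sketch of that idea, assuming a plain top-$k$ eigenvector projection; the paper's actual construction may differ.

```python
import numpy as np

def embed_from_coefficients(X, k=2):
    """Project coefficient columns onto the top-k eigenvectors of the
    (m x m) second-moment matrix X X^T / n -- cheap when m << n.
    Illustrative sketch; not necessarily the authors' construction."""
    C = (X @ X.T) / X.shape[1]
    _, vecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    return vecs[:, -k:].T @ X        # (k, n) embedding
```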
