Title
Trading off 1-norm and sparsity against rank for linear models using mathematical optimization: 1-norm minimizing partially reflexive ah-symmetric generalized inverses
Authors
Abstract
The M-P (Moore-Penrose) pseudoinverse has as a key application the computation of least-squares solutions of inconsistent systems of linear equations. Irrespective of whether a given input matrix is sparse, its M-P pseudoinverse can be dense, potentially leading to a high computational burden, especially when we are dealing with high-dimensional matrices. The M-P pseudoinverse is uniquely characterized by four properties, but only two of them need to be satisfied for the computation of least-squares solutions. Fampa and Lee (2018) and Xu, Fampa, Lee, and Ponte (2019) propose local-search procedures to construct sparse block-structured generalized inverses that satisfy the two key M-P properties, plus one more (the so-called reflexive property). That additional M-P property is equivalent to imposing a minimum-rank condition on the generalized inverse. (Vector) 1-norm minimization is used to induce sparsity and, importantly, to keep the magnitudes of entries under control for the constructed generalized inverses. Here, we investigate the trade-off between low 1-norm and low rank for generalized inverses that can be used in the computation of least-squares solutions. We propose several algorithmic approaches that start from a $1$-norm minimizing generalized inverse that satisfies the two key M-P properties, and gradually decrease its rank by iteratively imposing the reflexive property. The algorithms iterate until the generalized inverse has the least possible rank. During the iterations, we produce intermediate solutions, trading off low 1-norm (and typically high sparsity) against low rank.
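The starting point described in the abstract, a $1$-norm minimizing generalized inverse $H$ satisfying the two key M-P properties ($AHA = A$, and $AH$ symmetric, i.e., ah-symmetry), can be obtained by linear programming, since both properties are linear in the entries of $H$. The sketch below is illustrative only and is not the authors' implementation; the small random matrix, the SciPy-based LP formulation, and the variable names are assumptions made for this example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, n))  # illustrative input matrix

# Variable: H (n x m), vectorized column-major as vec(H) in R^{n*m}.
# Property P1, AHA = A:  (A^T kron A) vec(H) = vec(A).
P1 = np.kron(A.T, A)
b1 = A.flatten(order="F")

# Property P3 (ah-symmetry), AH = (AH)^T:
# vec(AH) = (I_m kron A) vec(H); entry (i, j) of AH sits at
# column-major index i + j*m, so enforce (AH)_{ij} = (AH)_{ji} for i < j.
S = np.kron(np.eye(m), A)
sym_rows = [S[i + j * m] - S[j + i * m] for j in range(m) for i in range(j)]
P3 = np.array(sym_rows)

A_eq = np.vstack([P1, P3])
b_eq = np.concatenate([b1, np.zeros(len(sym_rows))])

# Minimize the (vector) 1-norm of H: split H = P - N with P, N >= 0
# and minimize the sum of all entries of P and N.
nm = n * m
res = linprog(
    c=np.ones(2 * nm),
    A_eq=np.hstack([A_eq, -A_eq]),
    b_eq=b_eq,
    bounds=(0, None),
    method="highs",
)
H = (res.x[:nm] - res.x[nm:]).reshape((n, m), order="F")
```

Because $AH$ is then a symmetric idempotent (the orthogonal projector onto the column space of $A$), $x = Hb$ is a least-squares solution of $Ax = b$ for any right-hand side $b$; the rank-reducing iterations described in the abstract would start from such an $H$.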