Paper Title
Sparse linear regression -- CLuP achieves the ideal \emph{exact} ML
Paper Authors
Paper Abstract
In this paper we revisit one of the classical statistical problems, the so-called sparse maximum-likelihood (ML) linear regression. As a way of attacking this type of regression, we present a novel CLuP mechanism that to a degree relies on the \bl{\textbf{Random Duality Theory (RDT)}} based algorithmic machinery that we recently introduced in \cite{Stojnicclupint19,Stojnicclupcmpl19,Stojnicclupplt19,Stojniccluplargesc20,Stojniccluprephased20}. After the initial success that the CLuP exhibited in MIMO ML detection in \cite{Stojnicclupint19,Stojnicclupcmpl19,Stojnicclupplt19}, where it achieved the exact ML performance while maintaining excellent computational-complexity properties, one would naturally expect that a similar type of success can be achieved in other ML considerations. The results that we present here confirm that such an expectation is indeed reasonable. In particular, within the sparse regression context, the introduced CLuP mechanism indeed turns out to be able to \bl{\textbf{\emph{achieve the ideal ML performance}}}. Moreover, it can substantially outperform some of the most prominent earlier state-of-the-art algorithmic concepts, among them even the variants of the famous LASSO and SOCP from \cite{StojnicPrDepSocp10,StojnicGenLasso10,StojnicGenSocp10}. Also, our recent results presented in \cite{Stojniccluplargesc20,Stojniccluprephased20} showed that the CLuP has excellent \bl{\textbf{\emph{large-scale}}} and so-called \bl{\textbf{\emph{rephasing}}} abilities. Since such large-scale algorithmic features are possibly even more desirable within the sparse regression context, we also demonstrate here that the basic CLuP ideas can be reformulated to enable solving, with relative ease, regression problems with \bl{\textbf{\emph{several thousand}}} unknowns.
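For reference, here is a minimal sketch of the sparse ML regression setup the abstract refers to, under the standard linear Gaussian model typically assumed in the cited works (the notation $y$, $A$, $\bar{x}$, $k$, $\sigma$ is illustrative and not necessarily the paper's own): one observes $y = A\bar{x} + \sigma v$, with $v$ standard normal and $\bar{x}$ a $k$-sparse vector, and under Gaussian noise the exact ML estimator reduces to residual minimization over $k$-sparse vectors,
$$\hat{x}_{\mathrm{ML}} \in \arg\min_{\|x\|_0 \le k} \|y - Ax\|_2,$$
a combinatorial problem whose direct solution is in general computationally intractable.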
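To give a flavor of the CLuP mechanism itself, the MIMO variant introduced in \cite{Stojnicclupint19} iterates a simple convex program followed by a renormalization (this is a paraphrase of the MIMO formulation; the sparse-regression adaptation developed in this paper may differ in its constraint set):
$$x^{(i+1),s} = \arg\min_{x} \, -\big(x^{(i)}\big)^T x \quad \mbox{subject to} \quad \|y - Ax\|_2 \le r,\; x \in \left[-\tfrac{1}{\sqrt{n}}, \tfrac{1}{\sqrt{n}}\right]^n, \qquad x^{(i+1)} = \frac{x^{(i+1),s}}{\|x^{(i+1),s}\|_2},$$
where the radius $r$ controls the ``loosening'' of the ML constraint.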