Title

NBIHT: An Efficient Algorithm for 1-bit Compressed Sensing with Optimal Error Decay Rate

Authors

Friedlander, Michael P., Jeong, Halyun, Plan, Yaniv, Yilmaz, Ozgur

Abstract

The Binary Iterative Hard Thresholding (BIHT) algorithm is a popular reconstruction method for one-bit compressed sensing due to its simplicity and fast empirical convergence. There have been several works on BIHT, but a theoretical understanding of the corresponding approximation error and convergence rate still remains open. This paper shows that the normalized version of BIHT (NBIHT) achieves an approximation error rate optimal up to logarithmic factors. More precisely, using $m$ one-bit measurements of an $s$-sparse vector $x$, we prove that the approximation error of NBIHT is of order $O\left(\frac{1}{m}\right)$ up to logarithmic factors, which matches the information-theoretic lower bound $\Omega\left(\frac{1}{m}\right)$ proved by Jacques, Laska, Boufounos, and Baraniuk in 2013. To our knowledge, this is the first theoretical analysis of a BIHT-type algorithm that explains the optimal rate of error decay empirically observed in the literature. This also makes NBIHT the first provably computationally efficient one-bit compressed sensing algorithm that breaks the inverse square root error decay rate $O\left(\frac{1}{m^{1/2}}\right)$.
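To make the abstract concrete, the following is a minimal NumPy sketch of a BIHT-type iteration with the normalization step that gives NBIHT its name. It is an illustration assembled from the standard description of BIHT (gradient-style correction on sign mismatches followed by hard thresholding), not the authors' reference implementation; the step size `tau` and iteration count are illustrative choices.

```python
import numpy as np

def nbiht(A, y, s, tau=None, iters=200):
    """Sketch of normalized Binary Iterative Hard Thresholding (NBIHT).

    A     : (m, n) measurement matrix
    y     : (m,) one-bit measurements, y = sign(A @ x)
    s     : sparsity level of the target vector
    tau   : step size (1/m used here as an illustrative default)
    iters : number of iterations
    """
    m, n = A.shape
    if tau is None:
        tau = 1.0 / m
    x = np.zeros(n)
    for _ in range(iters):
        # Correction term driven by measurements whose signs disagree.
        g = A.T @ (y - np.sign(A @ x))
        x = x + tau * g
        # Hard threshold: keep only the s largest-magnitude entries.
        small = np.argsort(np.abs(x))[:-s]
        x[small] = 0.0
        # Normalization step: this projection onto the unit sphere is
        # what distinguishes NBIHT from plain BIHT.
        nrm = np.linalg.norm(x)
        if nrm > 0:
            x = x / nrm
    return x
```

Since one-bit measurements discard all magnitude information, the target can only be recovered up to scale; normalizing each iterate makes the iteration consistent with recovering the unit-norm direction of $x$.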
