Paper Title

Adaptive support driven Bayesian reweighted algorithm for sparse signal recovery

Authors

Li, Junlin; Zhou, Wei; Cheng, Cheng

Abstract

Sparse learning has been widely studied to capture critical information from enormous data sources in the field of system identification. Often, it is essential to understand the internal working mechanisms of unknown systems (e.g., biological networks) in addition to their input-output relationships. For this purpose, various feature selection techniques have been developed. For example, sparse Bayesian learning (SBL) was proposed to learn major features from a dictionary of basis functions, which makes the identified models interpretable. Reweighted L1-regularization algorithms are often applied in SBL to solve the optimization problems. However, they are expensive in both computation and memory, and thus not suitable for large-scale problems. This paper proposes an adaptive support driven Bayesian reweighted (ASDBR) algorithm for sparse signal recovery. A restart strategy based on shrinkage-thresholding is developed to conduct adaptive support estimation, which can effectively reduce the computational burden and memory demands. Moreover, ASDBR accurately extracts major features and excludes redundant information from large datasets. Numerical experiments demonstrate that the proposed algorithm outperforms state-of-the-art methods.
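To make the idea in the abstract concrete, below is a minimal, illustrative Python sketch of a reweighted-L1 loop with shrinkage-threshold-based support pruning. It is not the authors' ASDBR implementation: each outer pass reweights the L1 penalty from the current estimate, solves the weighted subproblem with a plain ISTA stand-in, and then restarts on the reduced support formed by the coefficients that survive a threshold. All names and parameters here (`reweighted_l1_support_sketch`, `lam`, `support_tol`, etc.) are assumptions for illustration only.

```python
import numpy as np

def reweighted_l1_support_sketch(Phi, y, lam=0.1, eps=1e-3,
                                 n_outer=10, n_inner=200, support_tol=1e-6):
    """Illustrative reweighted-L1 loop with shrinkage-threshold support pruning.

    A sketch of the general idea only, not the paper's ASDBR algorithm:
    each outer pass reweights the L1 penalty, solves the weighted
    subproblem with a simple ISTA stand-in, and restarts on the estimated
    support (coefficients that survive the threshold).
    """
    m, n = Phi.shape
    support = np.arange(n)                 # start from the full dictionary
    x = np.zeros(n)

    for _ in range(n_outer):
        Phi_s = Phi[:, support]
        xs = x[support].copy()
        w = 1.0 / (np.abs(xs) + eps)       # SBL-style reweighting of the penalty
        step = 1.0 / (np.linalg.norm(Phi_s, 2) ** 2 + 1e-12)

        # Weighted L1 subproblem via proximal gradient (soft thresholding).
        for _ in range(n_inner):
            grad = Phi_s.T @ (Phi_s @ xs - y)
            z = xs - step * grad
            xs = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)

        x[:] = 0.0
        x[support] = xs

        # Adaptive support estimate: restart on the surviving coefficients only.
        keep = np.abs(xs) > support_tol
        if not np.any(keep):
            break
        support = support[keep]

    return x
```

With a random Gaussian dictionary `Phi` and measurements `y`, a toy call would be `x_hat = reweighted_l1_support_sketch(Phi, y)`. The point of restarting on the shrinking support is that each subsequent subproblem works with a matrix whose width is the current support size rather than the full dictionary, which is what keeps computation and memory manageable for large-scale problems.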
