Paper Title

When Hardness of Approximation Meets Hardness of Learning

Authors

Eran Malach, Shai Shalev-Shwartz

Abstract

A supervised learning algorithm has access to a distribution of labeled examples, and needs to return a function (hypothesis) that correctly labels the examples. The learner's hypothesis is taken from some fixed class of functions (e.g., linear classifiers or neural networks). A failure of the learning algorithm can occur for two possible reasons: a wrong choice of hypothesis class (hardness of approximation), or a failure to find the best function within the hypothesis class (hardness of learning). Although both approximation and learnability are important for the success of the algorithm, they are typically studied separately. In this work, we show a single hardness property that implies both hardness of approximation using linear classes and shallow networks, and hardness of learning using correlation queries and gradient descent. This allows us to obtain new results on the hardness of approximation and learnability of parity functions, DNF formulas, and $AC^0$ circuits.
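The parity functions mentioned in the abstract make the role of correlations concrete. Below is a minimal NumPy sketch (not from the paper; the dimension, hidden subset `S`, and sample size are illustrative assumptions): under the uniform distribution on $\{\pm 1\}^d$, a parity over two or more bits has zero population correlation with every individual input coordinate, so correlation queries, and gradient coordinates at initialization, which amount to such correlations, carry no signal about which bits define the target.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 20, 200_000          # input dimension and sample size (illustrative)
S = [2, 5, 11, 17]          # hidden subset defining the parity (hypothetical)

# Uniform inputs over {-1, +1}^d.
X = rng.choice([-1.0, 1.0], size=(n, d))

# Parity label: the product of the coordinates indexed by S.
y = X[:, S].prod(axis=1)

# Empirical correlation of the label with each individual coordinate.
# For a parity on |S| >= 2 bits every population correlation is exactly 0,
# so each estimate below is pure sampling noise of order 1/sqrt(n).
corr = (X * y[:, None]).mean(axis=0)
print(np.abs(corr).max())   # tiny, on the order of 1/sqrt(n): no signal
```

The printed value is nonzero only up to sampling noise of order $1/\sqrt{n}$, which is the kind of correlation-query barrier the abstract alludes to: any learner that only sees such correlations cannot distinguish which subset of bits defines the parity.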
