Paper Title
Meta-Learning for Simple Regret Minimization
Paper Authors
Paper Abstract
We develop a meta-learning framework for simple regret minimization in bandits. In this framework, a learning agent interacts with a sequence of bandit tasks, which are sampled i.i.d.\ from an unknown prior distribution, and learns its meta-parameters to perform better on future tasks. We propose the first Bayesian and frequentist meta-learning algorithms for this setting. The Bayesian algorithm has access to a prior distribution over the meta-parameters and its meta simple regret over $m$ bandit tasks with horizon $n$ is a mere $\tilde{O}(m / \sqrt{n})$. On the other hand, the meta simple regret of the frequentist algorithm is $\tilde{O}(\sqrt{m} n + m / \sqrt{n})$. While its regret is worse, the frequentist algorithm is more general because it does not need a prior distribution over the meta-parameters. It can also be analyzed in more settings. We instantiate our algorithms for several classes of bandit problems. Our algorithms are general, and we complement our theory by evaluating them empirically in several environments.
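As a reading aid, one common way to formalize the quantities above is sketched here; the notation ($\mu_{i,s}$ for the mean reward of arm $i$ in task $s$, $i_{*,s}$ for the best arm, and $J_s$ for the arm recommended after the $n$-round exploration in task $s$) is our own shorthand and may differ from the paper's definitions. Under this convention, the simple regret in task $s$ and the meta simple regret over $m$ tasks would be
\[
  \mathrm{SR}_s(n) = \mu_{i_{*,s},s} - \mathbb{E}\left[\mu_{J_s,s}\right], \qquad
  \mathrm{MSR}(m, n) = \sum_{s=1}^{m} \mathrm{SR}_s(n),
\]
so a bound such as $\tilde{O}(m / \sqrt{n})$ means the per-task simple regret shrinks at roughly a $1/\sqrt{n}$ rate.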