Paper Title
Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times
Paper Authors
Paper Abstract
Computing a Gaussian process (GP) posterior has a computational cost cubic in the number of historical points. A reformulation of the same GP posterior highlights that this complexity mainly depends on how many \emph{unique} historical points are considered. This can have important implications in active learning settings, where the set of historical points is constructed sequentially by the learner. We show that sequential black-box optimization based on GPs (GP-Opt) can be made efficient by sticking to a candidate solution for multiple evaluation steps and switching only when necessary. Limiting the number of switches also limits the number of unique points in the history of the GP. Thus, the efficient GP reformulation can be used to exactly and cheaply compute the posteriors required to run GP-Opt algorithms. This approach is especially useful in real-world applications of GP-Opt with high switching costs (e.g., switching chemicals in wet labs, data/model loading in hyperparameter optimization). As examples of this meta-approach, we modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to switch candidates as infrequently as possible by adapting rules from batched GP-Opt. These versions preserve all the theoretical no-regret guarantees while improving practical aspects of the algorithms, such as runtime, memory complexity, and the ability to batch candidates and evaluate them in parallel.
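The abstract's core observation, that the GP posterior cost depends on the number of \emph{unique} points rather than total evaluations, can be illustrated with a standard identity for replicated observations: under Gaussian noise σ², the k repeated observations at a point are equivalent to a single averaged observation with noise σ²/k. The sketch below (my own minimal illustration with a toy RBF kernel, not the paper's implementation) checks numerically that the posterior mean computed from all points with repeats matches the one computed from unique points only:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
sigma2 = 0.1  # observation noise variance

# History with many repeats: 3 unique candidates, 12 total evaluations
X_unique = np.array([[0.0], [1.0], [2.5]])
counts = np.array([5, 4, 3])
X_full = np.repeat(X_unique, counts, axis=0)  # repeats are contiguous
y_full = np.sin(X_full[:, 0]) + np.sqrt(sigma2) * rng.standard_normal(len(X_full))

X_test = np.array([[1.7]])

# Naive posterior mean: solve is O(n^3) in the 12 total points
K = rbf(X_full, X_full)
mu_naive = rbf(X_test, X_full) @ np.linalg.solve(
    K + sigma2 * np.eye(len(X_full)), y_full)

# Reformulated posterior mean: O(m^3) in the 3 unique points, replacing
# each group of repeats by its average with per-point noise sigma2 / count
starts = np.cumsum(counts) - counts
y_bar = np.array([y_full[s:s + c].mean() for s, c in zip(starts, counts)])
Ku = rbf(X_unique, X_unique)
mu_unique = rbf(X_test, X_unique) @ np.linalg.solve(
    Ku + sigma2 * np.diag(1.0 / counts), y_bar)

assert np.allclose(mu_naive, mu_unique)  # exact agreement, not approximate
```

This is why limiting switches pays off: each repeated evaluation of the current candidate grows the history without growing the kernel matrix that must be inverted.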