Paper Title

Online Hyper-parameter Tuning in Off-policy Learning via Evolutionary Strategies

Authors

Yunhao Tang, Krzysztof Choromanski

Abstract

Off-policy learning algorithms have been known to be sensitive to the choice of hyper-parameters. However, unlike near-on-policy algorithms for which hyper-parameters could be optimized via e.g. meta-gradients, similar techniques could not be straightforwardly applied to off-policy learning. In this work, we propose a framework which entails the application of Evolutionary Strategies to online hyper-parameter tuning in off-policy learning. Our formulation draws close connections to meta-gradients and leverages the strengths of black-box optimization with relatively low-dimensional search spaces. We show that our method outperforms state-of-the-art off-policy learning baselines with static hyper-parameters and recent prior work over a wide range of continuous control benchmarks.
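To make the idea of searching a low-dimensional hyper-parameter space with Evolutionary Strategies concrete, below is a minimal, hypothetical sketch of an antithetic ES update over two hyper-parameters (log learning rate and discount factor). It is not the authors' algorithm: the `evaluate` function, the specific hyper-parameters, and all constants are placeholder assumptions standing in for briefly running the off-policy learner with perturbed hyper-parameters and measuring performance.

```python
import numpy as np

def evaluate(hparams):
    # Placeholder objective (assumption): in the actual setting this would run
    # the off-policy learner for a short interval with these hyper-parameters
    # and return a performance estimate.
    log_lr, gamma = hparams
    return -((log_lr + 3.0) ** 2 + (gamma - 0.99) ** 2)

def es_step(theta, sigma=0.1, population=8, step_size=0.05, rng=None):
    """One antithetic-ES update of the hyper-parameter vector `theta`."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((population, theta.size))
    # Antithetic sampling: evaluate +/- perturbations to reduce variance.
    rewards_pos = np.array([evaluate(theta + sigma * e) for e in eps])
    rewards_neg = np.array([evaluate(theta - sigma * e) for e in eps])
    grad = ((rewards_pos - rewards_neg)[:, None] * eps).mean(axis=0) / (2 * sigma)
    return theta + step_size * grad

theta = np.array([-4.0, 0.9])  # initial (log10 learning rate, discount factor)
for _ in range(20):
    theta = es_step(theta)
print("tuned hyper-parameters:", theta)
```

Because the search space here is only two-dimensional, a small ES population already gives a usable gradient estimate, which is the kind of low-dimensional black-box setting the abstract refers to.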
