Paper Title

Local Latent Space Bayesian Optimization over Structured Inputs

Authors

Natalie Maus, Haydn T. Jones, Juston S. Moore, Matt J. Kusner, John Bradshaw, Jacob R. Gardner

Abstract

Bayesian optimization over the latent spaces of deep autoencoder models (DAEs) has recently emerged as a promising new approach for optimizing challenging black-box functions over structured, discrete, hard-to-enumerate search spaces (e.g., molecules). Here the DAE dramatically simplifies the search space by mapping inputs into a continuous latent space where familiar Bayesian optimization tools can be more readily applied. Despite this simplification, the latent space typically remains high-dimensional. Thus, even with a well-suited latent space, these approaches do not necessarily provide a complete solution, but may rather shift the structured optimization problem to a high-dimensional one. In this paper, we propose LOL-BO, which adapts the notion of trust regions explored in recent work on high-dimensional Bayesian optimization to the structured setting. By reformulating the encoder to function as both an encoder for the DAE globally and as a deep kernel for the surrogate model within a trust region, we better align the notion of local optimization in the latent space with local optimization in the input space. LOL-BO achieves as much as 20 times improvement over state-of-the-art latent space Bayesian optimization methods across six real-world benchmarks, demonstrating that improvement in optimization strategies is as important as developing better DAE models.
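The trust-region idea the abstract refers to can be illustrated with a minimal toy sketch. The code below is NOT the paper's LOL-BO implementation: it omits the DAE, the GP surrogate, and the shared deep kernel entirely, and simply runs a TuRBO-style adaptive trust region (expand on repeated successes, shrink on repeated failures) over a hypothetical continuous "latent space" with a stand-in quadratic objective and random-candidate proposals.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(z):
    # Toy stand-in for "decode latent point, then score the structured input"
    # (e.g. decode to a molecule and evaluate a property). Maximum is 0 at z = 0.5.
    return -np.sum((z - 0.5) ** 2)

dim = 8
# Simplified TuRBO-style trust-region state (hypothetical hyperparameters).
length, length_min, length_max = 0.8, 0.01, 1.6
success, failure = 0, 0
center = rng.uniform(-1.0, 1.0, dim)   # incumbent latent point
best = objective(center)

for _ in range(200):
    # Propose candidates only inside the trust region around the incumbent
    # (the real method would optimize a GP acquisition function here).
    cands = center + rng.uniform(-0.5, 0.5, (64, dim)) * length
    vals = np.array([objective(c) for c in cands])
    i = int(vals.argmax())
    if vals[i] > best:                 # success: move the trust-region center
        best, center = vals[i], cands[i]
        success, failure = success + 1, 0
    else:                              # failure: incumbent unchanged
        success, failure = 0, failure + 1
    if success >= 3:                   # expand after repeated successes
        length, success = min(length * 2, length_max), 0
    elif failure >= 5:                 # shrink after repeated failures
        length, failure = max(length / 2, length_min), 0  # (TuRBO restarts here; we clamp)

print(best)
```

The point of the sketch is the control flow, not the numbers: restricting proposals to an adaptive box around the incumbent keeps the search local in the latent space, which is what LOL-BO further aligns with locality in the original structured input space via its encoder/deep-kernel reformulation.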
