Paper Title

Stochastic Localization Methods for Convex Discrete Optimization via Simulation

Authors

Haixiang Zhang, Zeyu Zheng, Javad Lavaei

Abstract

We develop and analyze a set of new sequential simulation-optimization algorithms for large-scale multi-dimensional discrete optimization via simulation problems with a convexity structure. The "large-scale" notion refers to the fact that the decision variable has a large number of values to choose from in each dimension. The proposed algorithms are designed to identify a solution that is close to the optimal solution, within any given precision level and with any given probability. To achieve this target, utilizing the convexity structure, our algorithm design does not need to scan all the choices of the decision variable; instead, it sequentially draws a subset of choices of the decision variable and uses them to "localize" potentially near-optimal solutions to an adaptively shrinking region. To show the power of the localization operation, we first consider one-dimensional large-scale problems. We propose the shrinking uniform sampling algorithm, which is proved to achieve the target with an optimal expected simulation cost under an asymptotic criterion. For multi-dimensional problems, we combine the idea of localization with subgradient information and propose a framework for designing stochastic cutting-plane methods and the dimension reduction algorithm, whose expected simulation costs have a low dependence on the scale and the dimension of the problems. The proposed algorithms do not require prior information about the Lipschitz constant of the objective function, and the simulation costs are upper bounded by a value that is independent of the Lipschitz constant. Finally, we propose an adaptive algorithm to handle the case of unknown noise variance, under the assumption that the randomness of the system is Gaussian. We implement the proposed algorithms on both synthetic and queueing simulation-optimization problems and demonstrate better performance compared to benchmark methods.
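As a rough illustration of the localization idea in one dimension, the sketch below repeatedly samples a few evenly spaced candidate points in the current search interval, estimates the objective at each point by averaging simulation replications, and uses convexity to shrink the interval around the empirical minimizer. This is a minimal sketch of the general idea only, not the paper's shrinking uniform sampling algorithm; the function name, the fixed replication count `reps`, the grid size, and the round limit are all illustrative assumptions.

```python
import numpy as np

def shrinking_localization_1d(simulate, lo, hi, reps=200, grid=5, rounds=20):
    """Illustrative 1-D localization loop for convex discrete optimization via simulation.

    simulate(x) returns one noisy observation of a convex objective f(x) on the
    integer grid {lo, ..., hi}. Each round samples a few evenly spaced candidate
    points in the current interval, averages `reps` replications per point, and
    shrinks the interval around the empirical minimizer; convexity is what keeps
    the true optimum inside the retained region with high probability.
    """
    while hi - lo > 2 and rounds > 0:
        candidates = np.unique(np.linspace(lo, hi, grid).round().astype(int))
        estimates = [np.mean([simulate(int(x)) for _ in range(reps)]) for x in candidates]
        best = candidates[int(np.argmin(estimates))]
        # Keep only the interval between the neighbors of the empirical minimizer.
        left = candidates[candidates < best]
        right = candidates[candidates > best]
        lo = int(left[-1]) if left.size else int(best)
        hi = int(right[0]) if right.size else int(best)
        rounds -= 1
    # Exhaustively compare the few remaining points.
    finalists = list(range(lo, hi + 1))
    means = [np.mean([simulate(x) for _ in range(reps)]) for x in finalists]
    return finalists[int(np.argmin(means))]

if __name__ == "__main__":
    # Synthetic example: convex objective (x - 137)^2 observed with Gaussian noise.
    rng = np.random.default_rng(0)
    f = lambda x: (x - 137) ** 2 + rng.normal(scale=50.0)
    print(shrinking_localization_1d(f, lo=0, hi=1000))
```

The precision reached by this sketch depends on the fixed per-point replication budget; the algorithms in the paper instead choose the sampling and replication schedule to meet a prescribed precision level and probability guarantee, which this illustration does not attempt.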
