Paper Title
Model-Free Prediction of Partially Observable Spatiotemporal Chaotic Systems
Order-optimal Correlated Rounding for Fulfilling Multi-item E-commerce Orders
Paper Authors
Paper Abstract
Reservoir computing is a powerful tool for predicting turbulence, as its simple architecture offers the computational efficiency needed to handle large systems. However, its implementation typically requires full state-vector measurements and knowledge of the system nonlinearities. We use a nonlinear projection function to expand the system measurements into a high-dimensional space, which is then fed into the reservoir to obtain predictions. We demonstrate the application of this reservoir-computing network to a spatiotemporally chaotic system that emulates several features of turbulence. We show that using radial basis functions as the nonlinear projector enables complex system nonlinearities to be captured robustly, even with only partial observations and without knowledge of the governing equations. Finally, we show that our network can still produce reasonably accurate predictions when measurements are sparse, incomplete, and noisy, and even when the governing equations become inaccurate, thereby paving the way for model-free prediction of practical turbulent systems.
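To make the pipeline concrete, the following is a minimal Python/NumPy sketch of an echo-state-style reservoir driven by radial-basis-function features of a partial observation vector. All names (`rbf_features`, `Win`, `W`, `Wout`, `centers`) and the hyperparameters are illustrative assumptions, not the paper's actual network, training procedure, or parameter choices.

```python
# Sketch only: RBF-projected partial observations driving a simple reservoir.
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(u, centers, gamma=1.0):
    """Project a partial observation u into a high-dimensional RBF space."""
    # phi_k(u) = exp(-gamma * ||u - c_k||^2)
    d2 = np.sum((u[None, :] - centers) ** 2, axis=1)
    return np.exp(-gamma * d2)

# Toy sizes: n_obs observed variables, n_rbf RBF centers, n_res reservoir nodes.
n_obs, n_rbf, n_res = 8, 64, 500
centers = rng.normal(size=(n_rbf, n_obs))          # RBF centers (e.g. drawn from data)
Win = rng.uniform(-0.5, 0.5, size=(n_res, n_rbf))  # input weights
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep spectral radius below 1

def step(r, u):
    """One reservoir update driven by RBF-projected measurements."""
    return np.tanh(W @ r + Win @ rbf_features(u, centers))

def train_readout(U, beta=1e-6):
    """Ridge-regress a linear readout from reservoir states to the next observation."""
    r = np.zeros(n_res)
    R, Y = [], []
    for t in range(len(U) - 1):
        r = step(r, U[t])
        R.append(r)
        Y.append(U[t + 1])
    R, Y = np.array(R), np.array(Y)
    return Y.T @ R @ np.linalg.inv(R.T @ R + beta * np.eye(n_res))

def predict(Wout, u0, r0, n_steps):
    """Closed-loop prediction: the readout output is fed back as the next input."""
    u, r, out = u0, r0, []
    for _ in range(n_steps):
        r = step(r, u)
        u = Wout @ r
        out.append(u)
    return np.array(out)
```

In this sketch the RBF layer supplies the nonlinearity that a plain linear input map would lack, which is the role the abstract assigns to the nonlinear projector when only partial, equation-free measurements are available.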
We study the dynamic fulfillment problem in e-commerce, in which incoming (multi-item) customer orders must be immediately dispatched to (a combination of) fulfillment centers that have the required inventory. A prevailing approach to this problem, pioneered by Jasin and Sinha (2015), is to write a ``deterministic'' linear program that dictates, for each item in an incoming multi-item order from a particular region, how frequently it should be dispatched to each fulfillment center (FC). However, dispatching items in a way that satisfies these frequency constraints, without splitting the order across too many FCs, is challenging. Jasin and Sinha identify this as a correlated rounding problem and propose an intricate rounding scheme that they prove is suboptimal by a factor of at most $\approx q/4$ on a $q$-item order. This paper provides, to our knowledge, the first substantially improved scheme for this correlated rounding problem, which is suboptimal by a factor of at most $1+\ln(q)$. We provide another scheme for sparse networks, which is suboptimal by a factor of at most $d$ if each item is stored in at most $d$ FCs. We show both of these guarantees to be tight in terms of the dependence on $q$ or $d$. Our schemes are simple and fast, based on an intuitive idea: items wait for FCs to ``open'' at random times, but observe them on ``dilated'' time scales. This also implies a new randomized rounding method for the classical Set Cover problem, which could be of general interest. We numerically test our new rounding schemes under the same realistic setups as Jasin and Sinha (2015) and find that they improve runtimes, shorten code, and robustly improve performance. Our code is made publicly available.
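As a way to visualize the ``random opening times on dilated time scales'' idea, here is a schematic Python sketch of one plausible reading of that mechanism. The function name `correlated_round`, the use of Exp(1) opening clocks, and the toy data are assumptions made for illustration; they are not the paper's exact scheme or its analysis.

```python
# Schematic sketch (an assumed reading of the mechanism, not the paper's exact scheme):
# each FC "opens" at an independent Exp(1) time T[j]; item i observes FC j on a
# dilated clock T[j] / x[i][j], where x[i][j] is the LP dispatch frequency, and is
# sent to the FC that opens first on its own dilated scale.
import math
import random

def correlated_round(x, eligible):
    """x[i][j]: LP frequency of sending item i to FC j (summing to 1 over eligible j).
    eligible[i]: set of FCs that stock item i.
    Returns a dict mapping each item to its chosen FC."""
    fcs = {j for fc_set in eligible.values() for j in fc_set}
    T = {j: random.expovariate(1.0) for j in fcs}   # shared random opening times
    assignment = {}
    for i, fc_set in eligible.items():
        # Item i sees FC j at dilated time T[j] / x[i][j]; the earliest one wins.
        assignment[i] = min(
            fc_set,
            key=lambda j: T[j] / x[i][j] if x[i][j] > 0 else math.inf,
        )
    return assignment

# Toy two-item order over three FCs, with LP frequencies summing to 1 per item.
x = {1: {"A": 0.5, "B": 0.5, "C": 0.0},
     2: {"A": 0.2, "B": 0.0, "C": 0.8}}
eligible = {1: {"A", "B"}, 2: {"A", "C"}}
print(correlated_round(x, eligible))
```

In this sketch, each item's marginal choice matches its LP frequency (by the standard property of exponential races), while the opening times `T[j]` shared across items positively correlate their choices, which is what keeps a multi-item order from splitting across too many FCs.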