Paper Title

More Practical and Adaptive Algorithms for Online Quantum State Learning

Paper Authors

Yifang Chen, Xin Wang

Paper Abstract

Online quantum state learning is a problem recently proposed by Aaronson et al. (2018), in which the learner sequentially predicts an $n$-qubit quantum state based on given measurements on the state and noisy outcomes. In previous work, the algorithms are worst-case optimal in general but fail to achieve tighter bounds in certain simpler or more practical cases. In this paper, we develop algorithms to advance the online learning of quantum states. First, we show that the Regularized Follow-the-Leader (RFTL) method with Tallis-2 entropy can achieve an $O(\sqrt{MT})$ total loss with perfect hindsight on the first $T$ measurements with maximum rank $M$. This regret bound depends only on the maximum rank $M$ of the measurements rather than on the number of qubits, and thus takes advantage of low-rank measurements. Second, we propose a parameter-free algorithm based on a classical adaptive learning-rate schedule that can achieve a regret depending on the loss of the best state in hindsight, which takes advantage of low-noise outcomes. Besides these more adaptive bounds, we also show that our RFTL with Tallis-2 entropy algorithm can be implemented efficiently on near-term quantum computing devices, which is not achievable in previous works.
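To make the RFTL result concrete, below is a minimal, illustrative sketch (not the authors' implementation). It assumes the regularizer is the negative Tsallis-2 entropy, i.e. $R(\rho)=\operatorname{tr}(\rho^2)$ up to a constant, and that the per-round loss is the absolute loss $|\operatorname{tr}(E_t\rho)-b_t|$ for a two-outcome measurement $E_t$ with reported outcome $b_t$. Under these assumptions the RFTL update reduces to a Frobenius-norm projection of the scaled negative gradient sum onto the set of density matrices. The learning rate `eta`, the helper names `project_simplex`, `project_to_density_matrix`, and `rftl_tsallis2_step`, and the toy single-qubit loop are all our own choices for illustration, not from the paper.

```python
# Minimal sketch of RFTL with a Tsallis-2 (tr(rho^2)) regularizer for online
# quantum state learning, under the assumptions stated above.
import numpy as np


def project_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    k = np.max(np.where(u - css / idx > 0)[0]) + 1
    theta = css[k - 1] / k
    return np.maximum(v - theta, 0.0)


def project_to_density_matrix(H: np.ndarray) -> np.ndarray:
    """Frobenius-norm projection of a Hermitian matrix onto the set
    {rho : rho >= 0, tr(rho) = 1}: project the eigenvalues onto the
    probability simplex and rebuild the matrix."""
    w, V = np.linalg.eigh(H)
    w_proj = project_simplex(w)
    return (V * w_proj) @ V.conj().T


def rftl_tsallis2_step(grad_sum: np.ndarray, eta: float) -> np.ndarray:
    """One RFTL update with regularizer R(rho) = tr(rho^2):
    rho_{t+1} = argmin_rho  eta * <grad_sum, rho> + tr(rho^2)
              = Proj_F( -(eta / 2) * grad_sum )."""
    return project_to_density_matrix(-(eta / 2.0) * grad_sum)


# Toy usage on a single qubit (2 x 2 density matrices) with random
# two-outcome measurements and random outcomes.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, eta, T = 2, 0.5, 20
    rho = np.eye(dim) / dim                    # start from the maximally mixed state
    grad_sum = np.zeros((dim, dim), dtype=complex)
    for _ in range(T):
        # Random measurement operator E: Hermitian with eigenvalues in [0, 1].
        A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        H = (A + A.conj().T) / 2
        w, V = np.linalg.eigh(H)
        E = (V * np.clip(w, 0, 1)) @ V.conj().T
        b = rng.uniform()                      # noisy outcome shown to the learner
        pred = np.real(np.trace(E @ rho))      # learner's prediction tr(E rho)
        grad = np.sign(pred - b) * E           # subgradient of |tr(E rho) - b| in rho
        grad_sum += grad
        rho = rftl_tsallis2_step(grad_sum, eta)
```

A practical appeal of this regularizer, at least in the sketch above, is that each update only needs an eigendecomposition plus a simplex projection of the eigenvalues. The paper's efficiency claim for near-term quantum devices concerns a quantum implementation of this kind of update, which is not attempted here.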
