Paper Title

Meta-Learning Online Control for Linear Dynamical Systems

Paper Authors

Deepan Muthirayan, Dileep Kalathil, Pramod P. Khargonekar

Paper Abstract

In this paper, we consider the problem of finding a meta-learning online control algorithm that can learn across tasks when faced with a sequence of $N$ (similar) control tasks. Each task involves controlling a linear dynamical system for a finite horizon of $T$ time steps. The cost function and system noise at each time step are adversarial and unknown to the controller before the control action is taken. Meta-learning is a broad approach where the goal is to prescribe an online policy for any new unseen task by exploiting the information from other tasks and the similarity between the tasks. We propose a meta-learning online control algorithm for this control setting and characterize its performance by the \textit{meta-regret}, the average cumulative regret across the tasks. We show that when the number of tasks is sufficiently large, our proposed approach achieves a meta-regret that is smaller by a factor $D/D^{*}$ than that of an independent-learning online control algorithm which does not learn across the tasks, where $D$ is a problem constant and $D^{*}$ is a scalar that decreases as the similarity between the tasks increases. Thus, when the tasks in the sequence are similar, the regret of the proposed meta-learning online control is significantly lower than that of naive approaches without meta-learning. We also present experimental results to demonstrate the superior performance achieved by our meta-learning algorithm.
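To make the setting described in the abstract concrete, it can be sketched as follows (the notation $A$, $B$, $x_{i,t}$, $u_{i,t}$, $w_{i,t}$, $c_{i,t}$, and the comparator class $\Pi$ is assumed here for illustration and is not taken from the paper). Each task $i \in \{1,\dots,N\}$ evolves as a linear dynamical system $x_{i,t+1} = A x_{i,t} + B u_{i,t} + w_{i,t}$, where the noise $w_{i,t}$ and the cost $c_{i,t}$ are chosen adversarially and revealed only after the control $u_{i,t}$ is applied. A standard per-task regret compares the incurred cost with the best fixed policy in hindsight, $\mathrm{Regret}_i = \sum_{t=1}^{T} c_{i,t}(x_{i,t},u_{i,t}) - \min_{\pi\in\Pi}\sum_{t=1}^{T} c_{i,t}(x^{\pi}_{i,t},u^{\pi}_{i,t})$, and the meta-regret is then the average of these quantities over the task sequence, $\frac{1}{N}\sum_{i=1}^{N}\mathrm{Regret}_i$.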
