Paper Title

Provable Meta-Learning of Linear Representations

Paper Authors

Nilesh Tripuraneni, Chi Jin, Michael I. Jordan

Paper Abstract

Meta-learning, or learning-to-learn, seeks to design algorithms that can utilize previous experience to rapidly learn new skills or adapt to new environments. Representation learning -- a key tool for performing meta-learning -- learns a data representation that can transfer knowledge across multiple tasks, which is essential in regimes where data is scarce. Despite a recent surge of interest in the practice of meta-learning, the theoretical underpinnings of meta-learning algorithms are lacking, especially in the context of learning transferable representations. In this paper, we focus on the problem of multi-task linear regression -- in which multiple linear regression models share a common, low-dimensional linear representation. Here, we provide provably fast, sample-efficient algorithms to address the dual challenges of (1) learning a common set of features from multiple, related tasks, and (2) transferring this knowledge to new, unseen tasks. Both are central to the general problem of meta-learning. Finally, we complement these results by providing information-theoretic lower bounds on the sample complexity of learning these linear features.
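The setup the abstract describes can be made concrete with a short numerical sketch. The snippet below simulates the multi-task linear model (each task's responses pass through a shared d x r feature matrix with a task-specific head), estimates the shared subspace with a method-of-moments style estimator (top eigenvectors of the label-weighted second moment), and then fits only the low-dimensional head on a new task. All names (B, alphas, B_hat) and the specific estimator are illustrative assumptions consistent with the abstract, not necessarily the paper's exact algorithm.

```python
import numpy as np

# Sketch of the multi-task linear model from the abstract:
#   y = x^T B alpha_t + noise,   for task t = 1..T,
# where B (d x r, orthonormal columns) is the shared low-dimensional
# representation and alpha_t is a task-specific head. All names here
# (B, alphas, B_hat) are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
d, r, n_tasks, n_per_task = 50, 3, 40, 100

# Shared representation: an orthonormal basis of an r-dim subspace of R^d.
B, _ = np.linalg.qr(rng.standard_normal((d, r)))
alphas = rng.standard_normal((n_tasks, r))  # task-specific heads

# Simulate the training tasks.
X = rng.standard_normal((n_tasks, n_per_task, d))
noise = 0.1 * rng.standard_normal((n_tasks, n_per_task))
Y = np.einsum("tnd,dr,tr->tn", X, B, alphas) + noise

# Step 1 (feature learning): a method-of-moments style estimate of col(B).
# For Gaussian x, E[y^2 x x^T] = c * I + 2 * avg_t(beta_t beta_t^T) with
# beta_t = B alpha_t, so the top-r eigenvectors span col(B).
M = np.einsum("tn,tnd,tne->de", Y**2, X, X) / (n_tasks * n_per_task)
_, eigvecs = np.linalg.eigh(M)      # eigenvalues in ascending order
B_hat = eigvecs[:, -r:]             # estimated basis for col(B)

# Subspace error ||(I - B_hat B_hat^T) B||_2 should be small.
err = np.linalg.norm((np.eye(d) - B_hat @ B_hat.T) @ B, 2)
print(f"subspace estimation error: {err:.3f}")

# Step 2 (transfer): on a new, unseen task, fit only the r-dim head
# in the learned feature space instead of all d raw coordinates.
x_new = rng.standard_normal((20, d))
y_new = x_new @ B @ rng.standard_normal(r) + 0.1 * rng.standard_normal(20)
head, *_ = np.linalg.lstsq(x_new @ B_hat, y_new, rcond=None)
```

With only r = 3 head parameters to fit per new task rather than all d = 50 raw coefficients, the transfer step needs far fewer samples per task, which is the sample-efficiency gain the abstract refers to.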
