Title
Multi-Task Meta Learning: learn how to adapt to unseen tasks
Authors
Abstract
This work proposes Multi-Task Meta Learning (MTML), integrating two learning paradigms, Multi-Task Learning (MTL) and meta learning, to bring together the best of both worlds. In particular, it focuses on the simultaneous learning of multiple tasks, an element of MTL, and on promptly adapting to new tasks, a quality of meta learning. It is important to highlight that we focus on heterogeneous tasks, which are of distinct kinds, in contrast to the typically considered homogeneous tasks (e.g., all tasks being classification, or all being regression). The fundamental idea is to train a multi-task model such that, when an unseen task is introduced, it can learn in fewer steps whilst offering performance at least as good as conventional single-task learning on the new task, or as its inclusion within the MTL. Through various experiments, we demonstrate this paradigm on two datasets and four tasks: NYU-v2 and the taskonomy dataset, for which we perform semantic segmentation, depth estimation, surface normal estimation, and edge detection. MTML achieves state-of-the-art results for three out of four tasks on the NYU-v2 dataset and for two out of four on the taskonomy dataset. In the taskonomy dataset, it was discovered that many pseudo-labeled segmentation masks lacked classes that were expected to be present in the ground truth; our MTML approach, however, proved effective at detecting these missing classes, delivering good qualitative results, although its quantitative performance suffered from the presence of incorrect ground-truth labels. The source code for reproducibility can be found at https://github.com/ricupa/MTML-learn-how-to-adapt-to-unseen-tasks.
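The core idea of the abstract — train jointly on several tasks, then use the resulting shared parameters as an initialization that adapts to an unseen task in fewer gradient steps — can be illustrated with a deliberately tiny sketch. This is not the paper's architecture; the one-parameter "model", the task targets, and the learning rate are all illustrative assumptions chosen so the effect is visible.

```python
# Hedged toy sketch of the MTML idea (NOT the paper's method): a shared
# parameter is trained jointly on several tasks (the MTL element), then
# reused as the initialization for an unseen task, which it fits in
# fewer gradient steps than a from-scratch initialization (the
# meta-learning element). All numbers below are illustrative.

def task_loss(w, target):
    """Squared error of a one-parameter 'model' on one task."""
    return (w - target) ** 2

def grad_step(w, target, lr=0.1):
    """One gradient-descent step; d/dw (w - t)^2 = 2 (w - t)."""
    return w - lr * 2 * (w - target)

def steps_to_fit(w, target, tol=1e-3, lr=0.1):
    """Count gradient steps until the task loss drops below `tol`."""
    steps = 0
    while task_loss(w, target) > tol:
        w = grad_step(w, target, lr)
        steps += 1
    return steps

# Multi-task phase: cycle gradient steps over the seen tasks, pulling
# the shared parameter toward a compromise between them.
seen_targets = [1.0, 2.0, 3.0]
w = 0.0
for _ in range(200):
    for t in seen_targets:
        w = grad_step(w, t)

# Adaptation phase: an unseen task related to the seen ones starts much
# closer under the shared initialization than under a cold start.
unseen_target = 2.5
from_shared = steps_to_fit(w, unseen_target)
from_scratch = steps_to_fit(0.0, unseen_target)
assert from_shared < from_scratch
```

The sketch only captures the "fewer steps whilst at least as good" claim in miniature; in the paper this plays out with a shared multi-task backbone and heterogeneous dense-prediction heads rather than a scalar parameter.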