Paper Title
Transfer Learning with Kernel Methods
Paper Authors
Paper Abstract
Transfer learning refers to the process of adapting a model trained on a source task to a target task. While kernel methods are conceptually and computationally simple machine learning models that are competitive on a variety of tasks, it has been unclear how to perform transfer learning for kernel methods. In this work, we propose a transfer learning framework for kernel methods by projecting and translating the source model to the target task. We demonstrate the effectiveness of our framework in applications to image classification and virtual drug screening. In particular, we show that transferring modern kernels trained on large-scale image datasets can result in substantial performance increases as compared to using the same kernel trained directly on the target task. In addition, we show that transfer-learned kernels allow a more accurate prediction of the effect of drugs on cancer cell lines. For both applications, we identify simple scaling laws that characterize the performance of transfer-learned kernels as a function of the number of target examples. We explain this phenomenon in a simplified linear setting, where we are able to derive the exact scaling laws. By providing a simple and effective transfer learning framework for kernel methods, our work enables kernel methods trained on large datasets to be easily adapted to a variety of downstream target tasks.
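To make the abstract's "projecting and translating the source model to the target task" concrete, here is a minimal sketch of one way such a two-step kernel transfer could look, using plain kernel ridge regression with an RBF kernel. The kernel choice, the helper names (rbf_kernel, kernel_ridge_fit), the random data, and the exact form of the projection and translation steps are illustrative assumptions, not the paper's precise procedure.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Gaussian (RBF) kernel matrix between rows of A and rows of B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_ridge_fit(K, y, reg=1e-3):
    # Solve (K + reg * I) alpha = y for the dual coefficients.
    return np.linalg.solve(K + reg * np.eye(len(K)), y)

rng = np.random.default_rng(0)

# Source task: plentiful (here, synthetic) data with multi-dimensional labels.
X_src, y_src = rng.normal(size=(500, 10)), rng.normal(size=(500, 3))
alpha_src = kernel_ridge_fit(rbf_kernel(X_src, X_src), y_src)

# Target task: only a few labeled examples.
X_tgt, y_tgt = rng.normal(size=(40, 10)), rng.normal(size=(40, 1))

# Step 1 ("projection"): evaluate the source model on the target inputs,
# producing source-task predictions that serve as features for the target task.
f_src = rbf_kernel(X_tgt, X_src) @ alpha_src        # shape (40, 3)

# Step 2 ("translation"): fit a second kernel model on the projected features
# to map them onto the target labels.
alpha_tgt = kernel_ridge_fit(rbf_kernel(f_src, f_src), y_tgt)

# Prediction on new target inputs: project through the source model, then translate.
X_new = rng.normal(size=(5, 10))
f_new = rbf_kernel(X_new, X_src) @ alpha_src
y_pred = rbf_kernel(f_new, f_src) @ alpha_tgt       # shape (5, 1)
print(y_pred.shape)
```

The point of the sketch is the structure rather than the specific kernel: the source model is trained once on the large source dataset, and only the small second-stage fit depends on the target examples, which is the setting in which the abstract's scaling laws in the number of target examples are studied.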