Paper Title
Capacity dependent analysis for functional online learning algorithms
Paper Authors
Paper Abstract
This article provides a convergence analysis of online stochastic gradient descent algorithms for functional linear models. By characterizing the regularity of the slope function, the capacity of the kernel space, and the capacity of the covariance operator of the sampling process, significant improvements in the convergence rates are achieved. Both prediction and estimation problems are studied, and we show that the capacity assumption can alleviate the saturation of the convergence rate as the regularity of the target function increases. We show that, with a properly selected kernel, capacity assumptions can fully compensate for regularity assumptions in prediction problems (but not in estimation problems). This demonstrates a significant difference between prediction and estimation problems in functional data analysis.
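To make the setting concrete, below is a minimal sketch (not the authors' implementation) of online SGD for a functional linear model Y = ∫ X(t) β(t) dt + ε, with the iterate updated in an RKHS via the kernel integral operator. The Gaussian kernel, the grid discretization, the decaying step size, and the simulated data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50                       # grid size for discretizing [0, 1]
t = np.linspace(0, 1, m)
h = 1.0 / m                  # quadrature weight for L^2 inner products

# Gaussian kernel matrix on the grid (assumed choice of RKHS kernel)
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.1 ** 2))

beta_true = np.sin(2 * np.pi * t)   # hypothetical true slope function

def sample(n):
    # Smooth random predictor curves X(t) built from a few Fourier modes
    c = rng.normal(size=(n, 3))
    X = (c[:, [0]] * np.sin(2 * np.pi * t)
         + c[:, [1]] * np.cos(2 * np.pi * t)
         + c[:, [2]] * np.ones_like(t))
    y = h * X @ beta_true + 0.05 * rng.normal(size=n)  # noisy responses
    return X, y

n = 3000
X, y = sample(n)
f = np.zeros(m)              # current slope estimate on the grid
losses = []
for i in range(n):
    pred = h * X[i] @ f                         # <X_i, f>_{L^2}
    losses.append((pred - y[i]) ** 2)
    eta = 1.0 / (1 + i) ** 0.5                  # polynomially decaying step size
    # Stochastic gradient step through the kernel integral operator L_K X_i
    f -= eta * (pred - y[i]) * (h * K @ X[i])

early = float(np.mean(losses[:200]))            # average loss, first 200 steps
late = float(np.mean(losses[-200:]))            # average loss, last 200 steps
```

The online prediction error (the squared loss along the trajectory) should decrease as the iterates accumulate, which is the quantity whose convergence rate the paper's capacity-dependent analysis sharpens.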