Paper Title
Iterative Supervised Learning for Regression with Constraints
Paper Authors
Abstract
Regression in supervised learning often requires the enforcement of constraints to ensure that the trained models are consistent with the underlying structure of the input and output data. This paper presents an iterative procedure for performing regression under arbitrary constraints. It alternates between a learning step and a constraint enforcement step, into which an affine extension function is incorporated. We show that this yields a contraction mapping under mild assumptions, from which convergence is guaranteed analytically. This proof of convergence for regression with constraints is the distinguishing contribution of the paper. Furthermore, numerical experiments show that, compared with existing algorithms, the trained models improve in regression quality, constraint satisfaction, and training stability.
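To make the alternating scheme described in the abstract concrete, the following is a minimal sketch, not the paper's actual algorithm or code: the least-squares regressor, the box projection standing in for an arbitrary constraint set, and the blending coefficient `alpha` standing in for the affine extension function are all illustrative assumptions.

```python
# Minimal sketch of an alternating learn/enforce loop for constrained regression.
# The helper names, the box-constraint projection, and the blending step are
# illustrative assumptions, not the paper's algorithm.
import numpy as np

def fit_linear(X, y):
    """Learning step: ordinary least squares (stand-in for any regressor)."""
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return w

def predict_linear(w, X):
    return np.c_[X, np.ones(len(X))] @ w

def project(y_pred, lo=0.0, hi=1.0):
    """Constraint enforcement step: a box projection as a simple stand-in
    for projecting onto an arbitrary constraint set."""
    return np.clip(y_pred, lo, hi)

def iterative_constrained_regression(X, y, n_iter=20, alpha=0.5):
    """Alternate between learning and constraint enforcement.
    `alpha` blends the projected predictions back into the training targets,
    loosely mimicking the role of an affine extension (assumed form)."""
    targets = y.copy()
    w = fit_linear(X, targets)
    for _ in range(n_iter):
        y_hat = predict_linear(w, X)                 # learning step output
        y_feas = project(y_hat)                      # enforce constraints
        targets = alpha * y_feas + (1 - alpha) * y   # affine blend of targets
        w = fit_linear(X, targets)                   # refit on corrected targets
    return w

# Tiny usage example with synthetic data whose outputs must lie in [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))
y = np.clip(X @ np.array([0.7, -0.3]) + 0.5 + 0.05 * rng.normal(size=100), 0, 1)
w = iterative_constrained_regression(X, y)
print("fitted weights:", w)
```

Under the abstract's contraction-mapping argument, iterates of such an alternating update would converge when the combined learn-and-enforce map is a contraction; the fixed blending used here is only one plausible way to realize that in code.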