Paper Title
Estimating a regression function in exponential families by model selection
Paper Authors
Paper Abstract
Let $X_{1}=(W_{1},Y_{1}),\ldots,X_{n}=(W_{n},Y_{n})$ be $n$ pairs of independent random variables. We assume that, for each $i\in\{1,\ldots,n\}$, the conditional distribution of $Y_{i}$ given $W_{i}$ belongs to a one-parameter exponential family with parameter ${\boldsymbol{\gamma}}^{\star}(W_{i})\in{\mathbb{R}}$, or at least is close enough to a distribution of this form. The objective of the present paper is to estimate these conditional distributions on the basis of the observation ${\boldsymbol{X}}=(X_{1},\ldots,X_{n})$. To do so, we propose a model selection procedure together with a non-asymptotic risk bound for the resulting estimator with respect to a Hellinger-type distance. When ${\boldsymbol{\gamma}}^{\star}$ does exist, the procedure yields an estimator $\widehat{\boldsymbol{\gamma}}$ of ${\boldsymbol{\gamma}}^{\star}$ that adapts to a wide range of anisotropic Besov spaces. When ${\boldsymbol{\gamma}}^{\star}$ has a general additive or multiple-index structure, we construct suitable models and show that the resulting estimators based on such models can circumvent the curse of dimensionality. Moreover, we consider model selection problems for ReLU neural networks and provide an example where estimation based on neural networks enjoys a much faster convergence rate than estimation based on classical models. Finally, we apply this procedure to solve the variable selection problem in exponential families. The proofs in the paper rely on bounding the VC dimensions of several collections of functions, which may be of independent interest.
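For concreteness, the statistical model described in the abstract can be sketched as follows; this is a standard natural-parameter formulation of a one-parameter exponential family written from the abstract alone, and the symbols $T$, $A$, $\nu$ and $P_{\gamma}$ are generic placeholders rather than notation taken from the paper:
$$
Y_{i}\mid W_{i}=w \;\sim\; P_{\gamma^{\star}(w)},
\qquad
\frac{dP_{\gamma}}{d\nu}(y) \;=\; \exp\bigl\{\gamma\,T(y)-A(\gamma)\bigr\},
\qquad
A(\gamma)=\log\int e^{\gamma T(y)}\,d\nu(y),
$$
so that, under this formulation, estimating the conditional distributions of the $Y_{i}$ given the $W_{i}$ amounts to estimating the regression map $w\mapsto\gamma^{\star}(w)\in\mathbb{R}$ from the sample $(W_{1},Y_{1}),\ldots,(W_{n},Y_{n})$.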