Paper Title

Deep Grey-Box Modeling With Adaptive Data-Driven Models Toward Trustworthy Estimation of Theory-Driven Models

Authors

Naoya Takeishi, Alexandros Kalousis

Abstract

The combination of deep neural nets and theory-driven models, which we call deep grey-box modeling, can be inherently interpretable to some extent thanks to the theory backbone. Deep grey-box models are usually learned with a regularized risk minimization to prevent a theory-driven part from being overwritten and ignored by a deep neural net. However, an estimation of the theory-driven part obtained by uncritically optimizing a regularizer can hardly be trustworthy when we are not sure what regularizer is suitable for the given data, which may harm the interpretability. Toward a trustworthy estimation of the theory-driven part, we should analyze regularizers' behavior to compare different candidates and to justify a specific choice. In this paper, we present a framework that enables us to analyze a regularizer's behavior empirically with a slight change in the neural net's architecture and the training objective.
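
To make the regularized risk minimization described in the abstract concrete, here is a minimal PyTorch sketch. The specific choices in it (an additive combination of theory and neural net, a linear-decay theory model with parameter `theta`, and a squared-norm penalty on the neural net's output as the regularizer) are illustrative assumptions, not the paper's actual architecture or regularizer.

```python
import torch
import torch.nn as nn

class DeepGreyBox(nn.Module):
    """Hypothetical deep grey-box model: a theory-driven part with a
    learnable parameter, corrected additively by a small neural net."""
    def __init__(self):
        super().__init__()
        # theta: parameter of the theory-driven model (assumed here to be
        # a linear decay rate, purely for illustration).
        self.theta = nn.Parameter(torch.tensor(1.0))
        # Data-driven correction term.
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, x):
        y_theory = -self.theta * x  # theory-driven prediction
        y_nn = self.net(x)          # data-driven residual
        return y_theory + y_nn, y_nn

def regularized_risk(model, x, y, lam=0.1):
    """Regularized risk: the penalty term discourages the neural net from
    overwriting and ignoring the theory-driven part. The choice of
    regularizer here (mean squared output of the net) is one candidate
    among many; comparing such candidates is exactly what the paper's
    framework is meant to support."""
    y_hat, y_nn = model(x)
    risk = ((y_hat - y) ** 2).mean()
    reg = (y_nn ** 2).mean()
    return risk + lam * reg
```

As the abstract argues, an estimate of `theta` obtained by uncritically minimizing this objective is only as trustworthy as the regularizer itself, which is why the paper focuses on empirically analyzing the regularizer's behavior rather than fixing one choice in advance.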
