Paper Title

A New Bandit Setting Balancing Information from State Evolution and Corrupted Context

Paper Authors

Galozy, Alexander, Nowaczyk, Slawomir, Ohlsson, Mattias

Paper Abstract

We propose a new sequential decision-making setting, combining key aspects of two established online learning problems with bandit feedback. The optimal action to play at any given moment is contingent on an underlying changing state which is not directly observable by the agent. Each state is associated with a context distribution, possibly corrupted, allowing the agent to identify the state. Furthermore, states evolve in a Markovian fashion, providing useful information to estimate the current state via state history. In the proposed problem setting, we tackle the challenge of deciding on which of the two sources of information the agent should base its arm selection. We present an algorithm that uses a referee to dynamically combine the policies of a contextual bandit and a multi-armed bandit. We capture the time-correlation of states through iteratively learning the action-reward transition model, allowing for efficient exploration of actions. Our setting is motivated by adaptive mobile health (mHealth) interventions. Users transition through different, time-correlated, but only partially observable internal states, determining their current needs. The side information associated with each internal state might not always be reliable, so standard approaches that rely solely on the context risk incurring high regret. Similarly, some users might exhibit weaker correlations between subsequent states, so approaches that rely solely on state transitions risk the same. We analyze our setting and algorithm in terms of regret lower and upper bounds, and evaluate our method on simulated medication adherence intervention data and several real-world data sets, showing improved empirical performance compared to several popular algorithms.
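
The abstract describes a referee that dynamically combines a contextual-bandit policy with a multi-armed-bandit policy. The sketch below is a minimal illustration of that general idea only, not the paper's algorithm: it pairs a LinUCB-style contextual learner with UCB1 and lets an exponential-weights referee choose which policy's proposed arm is played each round. All class names, the toy reward model, and hyperparameters such as `eta` and `alpha` are assumptions for illustration; in particular, the actual method also learns an action-reward transition model to exploit the Markovian state dynamics, which this sketch does not capture.

```python
# A minimal, hypothetical sketch of the "referee" idea from the abstract:
# exponential weights over two base policies -- a contextual bandit and a
# context-free multi-armed bandit -- decide whose proposed arm is played.
# Names, reward model, and hyperparameters are illustrative assumptions,
# not the paper's algorithm.
import numpy as np


class UCB1:
    """Context-free UCB1 over n_arms arms."""

    def __init__(self, n_arms):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)

    def propose(self, _context, t):
        if (self.counts == 0).any():
            return int(np.argmin(self.counts))  # play each arm once first
        bonus = np.sqrt(2.0 * np.log(t + 1) / self.counts)
        return int(np.argmax(self.values + bonus))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


class LinUCB:
    """Per-arm ridge regression on the context with an exploration bonus."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]
        self.alpha = alpha

    def propose(self, context, _t):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ context
                          + self.alpha * np.sqrt(context @ A_inv @ context))
        return int(np.argmax(scores))

    def update(self, arm, reward, context):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context


class Referee:
    """EXP3-style weights over the two base policies."""

    def __init__(self, eta=0.3):
        self.log_w = np.zeros(2)  # index 0: contextual, index 1: context-free
        self.eta = eta

    def pick(self, rng):
        p = np.exp(self.log_w - self.log_w.max())
        p /= p.sum()
        return int(rng.choice(2, p=p)), p

    def update(self, chosen, reward, p):
        # importance-weighted credit for the policy that was actually sampled
        self.log_w[chosen] += self.eta * reward / p[chosen]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_arms, dim = 3, 4
    contextual, context_free, referee = LinUCB(n_arms, dim), UCB1(n_arms), Referee()
    for t in range(500):
        context = rng.normal(size=dim)
        choice, p = referee.pick(rng)
        arm = (contextual.propose(context, t) if choice == 0
               else context_free.propose(context, t))
        reward = float(rng.random() < 0.3 + 0.2 * (arm == 0))  # toy Bernoulli reward
        contextual.update(arm, reward, context)
        context_free.update(arm, reward)
        referee.update(choice, reward, p)
```

The importance-weighted update credits only the policy that was sampled, so over time the referee shifts weight toward whichever information source, corrupted context or reward history, performs better for a given user, mirroring the trade-off the abstract motivates with mHealth interventions.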
