Paper Title

Representation Learning in Deep RL via Discrete Information Bottleneck

Authors

Riashat Islam, Hongyu Zang, Manan Tomar, Aniket Didolkar, Md Mofijul Islam, Samin Yeasar Arnob, Tariq Iqbal, Xin Li, Anirudh Goyal, Nicolas Heess, Alex Lamb

Abstract

Several self-supervised representation learning methods have been proposed for reinforcement learning (RL) with rich observations. For real-world applications of RL, recovering the underlying latent states is crucial, particularly when sensory inputs contain irrelevant and exogenous information. In this work, we study how information bottlenecks can be used to construct latent states efficiently in the presence of task-irrelevant information. We propose architectures that utilize variational and discrete information bottlenecks, coined RepDIB, to learn structured factorized representations. Exploiting the expressiveness brought by factorized representations, we introduce a simple yet effective bottleneck that can be integrated with any existing self-supervised objective for RL. We demonstrate this across several online and offline RL benchmarks, along with a real robot arm task, where we find that compressed representations with RepDIB can lead to strong performance improvements, as the learned bottlenecks help predict only the relevant state while ignoring irrelevant information.
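The "discrete information bottleneck" described in the abstract is in the family of vector-quantized representations: a continuous encoder embedding is split into factors, and each factor is snapped to its nearest entry in a learned codebook, discarding task-irrelevant detail. Below is a minimal PyTorch sketch of such a factorized VQ bottleneck under that assumption; the class name, arguments (`num_factors`, `codebook_size`, `beta`), and loss weighting are illustrative choices, not the authors' RepDIB implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteBottleneck(nn.Module):
    """Factorized vector-quantization bottleneck (illustrative sketch).

    Splits the encoder embedding into `num_factors` groups and quantizes
    each group against its own learned codebook, giving a structured,
    discrete representation. Hypothetical names; not the paper's code.
    """

    def __init__(self, embed_dim: int, num_factors: int = 4,
                 codebook_size: int = 32, beta: float = 0.25):
        super().__init__()
        assert embed_dim % num_factors == 0
        self.num_factors = num_factors
        self.factor_dim = embed_dim // num_factors
        self.beta = beta  # commitment-loss weight, as in VQ-VAE
        # One codebook of discrete codes per factor.
        self.codebooks = nn.Parameter(
            torch.randn(num_factors, codebook_size, self.factor_dim))

    def forward(self, z: torch.Tensor):
        # z: (batch, embed_dim) continuous encoder output.
        zf = z.view(z.size(0), self.num_factors, self.factor_dim)
        # Nearest codebook entry per factor: (factors, batch, codebook_size).
        dists = torch.cdist(zf.transpose(0, 1), self.codebooks)
        idx = dists.argmin(dim=-1)  # (factors, batch)
        quantized = torch.stack(
            [self.codebooks[f][idx[f]] for f in range(self.num_factors)],
            dim=1)  # (batch, factors, factor_dim)
        # VQ losses: pull codes toward encoder outputs, and commit the
        # encoder to its chosen codes.
        vq_loss = (F.mse_loss(quantized, zf.detach())
                   + self.beta * F.mse_loss(zf, quantized.detach()))
        # Straight-through estimator: copy gradients past the argmin.
        quantized = zf + (quantized - zf).detach()
        return quantized.flatten(1), vq_loss
```

In such a setup, the returned `vq_loss` would simply be added to whatever self-supervised representation objective trains the encoder, which matches the abstract's claim that the bottleneck can be integrated with any existing objective; the straight-through trick keeps the encoder trainable despite the non-differentiable nearest-neighbor lookup.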
