Paper Title
The StarCraft Multi-Agent Challenges+ : Learning of Multi-Stage Tasks and Environmental Factors without Precise Reward Functions
Paper Authors
Paper Abstract
In this paper, we propose a novel benchmark called the StarCraft Multi-Agent Challenges+ (SMAC+), in which agents learn to perform multi-stage tasks and to exploit environmental factors without precise reward functions. The previous challenge (SMAC), recognized as a standard benchmark for Multi-Agent Reinforcement Learning (MARL), is mainly concerned with ensuring that all agents cooperatively eliminate approaching adversaries through fine-grained manipulation under an obvious reward function. SMAC+, in contrast, targets the exploration capability of MARL algorithms: agents must efficiently learn implicit multi-stage tasks and environmental factors in addition to micro-control. This study covers both offensive and defensive scenarios. In the offensive scenarios, agents must learn to first find opponents and then eliminate them. The defensive scenarios require agents to use topographic features; for example, agents need to position themselves behind protective structures to make it harder for enemies to attack them. We investigate MARL algorithms under SMAC+ and observe that recent approaches perform well in settings similar to the previous challenge but fail in the offensive scenarios. We also observe that an enhanced exploration approach has a positive effect on performance yet cannot completely solve all scenarios. This study suggests new directions for future research.
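To make the benchmark setup concrete, below is a minimal random-agent rollout sketch against the original SMAC Python API (`smac.env.StarCraft2Env`), which SMAC+ builds on. This is an illustration only: it assumes SMAC+ exposes the same environment interface as SMAC, and the map name `def_armored` is a hypothetical placeholder for one of the SMAC+ defensive scenarios, not a confirmed identifier.

```python
# Sketch: random-agent rollout using the SMAC environment interface.
# Assumption: SMAC+ scenarios load through the same StarCraft2Env API;
# "def_armored" below is a hypothetical SMAC+ map name.
import numpy as np
from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="def_armored")  # hypothetical SMAC+ scenario
env_info = env.get_env_info()
n_agents = env_info["n_agents"]

env.reset()
terminated = False
episode_reward = 0.0
while not terminated:
    actions = []
    for agent_id in range(n_agents):
        # Sample uniformly among the actions currently available to this agent.
        avail = np.nonzero(env.get_avail_agent_actions(agent_id))[0]
        actions.append(np.random.choice(avail))
    # step() returns a single shared team reward, a terminal flag, and info.
    reward, terminated, _ = env.step(actions)
    episode_reward += reward

print(f"Episode reward: {episode_reward}")
env.close()
```

A random policy like this is only a smoke test; the abstract's point is precisely that such sparse, implicit multi-stage scenarios demand stronger exploration than uniform sampling or standard MARL baselines provide.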