Paper Title
Finding Game Levels with the Right Difficulty in a Few Trials through Intelligent Trial-and-Error
Paper Authors
Paper Abstract
Methods for dynamic difficulty adjustment allow games to be tailored to particular players to maximize their engagement. However, current methods often only modify a limited set of game features, such as the difficulty of the opponents or the availability of resources. Other approaches, such as experience-driven Procedural Content Generation (PCG), can generate complete levels with desired properties, such as levels that are neither too hard nor too easy, but require many iterations. This paper presents a method that can generate and search for complete levels with a specific target difficulty in only a few trials. This advance is enabled by an Intelligent Trial-and-Error algorithm, originally developed to allow robots to adapt quickly. Our algorithm first creates a large variety of different levels that vary across predefined dimensions such as leniency or map coverage. The performance of an AI playing agent on these maps provides a proxy for how difficult a level would be for another AI agent (e.g., one that employs Monte Carlo Tree Search instead of Greedy Tree Search); using this information, a Bayesian Optimization procedure is deployed, updating a prior over map difficulty to reflect the ability of the agent. The approach can reliably find levels with a specific target difficulty for a variety of planning agents in only a few trials, while maintaining an understanding of their skill landscape.
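The search loop sketched in the abstract can be illustrated in code. The following is a minimal sketch, not the authors' implementation: it assumes a precomputed archive of levels indexed by behavior descriptors, a per-level difficulty prior measured with a proxy agent, and a hypothetical `play` callback that returns the observed difficulty for the target agent. A simple Gaussian-process regression over the residual (observed minus prior difficulty) stands in for the Bayesian Optimization step, and in each trial the untried level whose posterior-mean difficulty is closest to the target is evaluated.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=0.5):
    """Squared-exponential kernel between two sets of behavior descriptors."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def find_level_with_target_difficulty(archive, prior, play, target,
                                      n_trials=5, noise=1e-3, tol=0.05):
    """Intelligent Trial-and-Error-style search (sketch).

    archive : (N, D) behavior descriptors of the pre-generated levels
    prior   : (N,) prior difficulty of each level, from a proxy agent
    play    : hypothetical callback, play(i) -> observed difficulty of
              level i for the target agent
    target  : desired difficulty in the same scale as `prior`
    """
    tried, residuals = [], []
    for _ in range(n_trials):
        if tried:
            # GP posterior mean over the residual, added back onto the prior.
            Xt = archive[tried]
            K = rbf_kernel(Xt, Xt) + noise * np.eye(len(tried))
            alpha = np.linalg.solve(K, np.array(residuals))
            mean = prior + rbf_kernel(archive, Xt) @ alpha
        else:
            mean = prior.copy()
        # Pick the untried level predicted to be closest to the target.
        score = np.abs(mean - target)
        score[tried] = np.inf
        idx = int(np.argmin(score))
        obs = play(idx)
        tried.append(idx)
        residuals.append(obs - prior[idx])
        if abs(obs - target) < tol:
            break
    return idx, obs
```

For example, if the target agent consistently finds levels somewhat harder than the proxy agent did, the first observation shifts the posterior mean for all nearby levels, so the second trial already compensates for the mismatch rather than restarting the search.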