Paper Title
Low-Rank Modular Reinforcement Learning via Muscle Synergy
Paper Authors
Paper Abstract
Modular Reinforcement Learning (RL) decentralizes the control of multi-joint robots by learning policies for each actuator. Previous work on modular RL has proven its ability to control morphologically different agents with a shared actuator policy. However, as the Degrees of Freedom (DoF) of robots increase, training a morphology-generalizable modular controller becomes exponentially more difficult. Motivated by the way the human central nervous system controls numerous muscles, we propose a Synergy-Oriented LeARning (SOLAR) framework that exploits the redundant nature of DoF in robot control. Actuators are grouped into synergies by an unsupervised learning method, and a synergy action is learned to control multiple actuators in synchrony. In this way, we achieve low-rank control at the synergy level. We extensively evaluate our method on a variety of robot morphologies, and the results show its superior efficiency and generalizability, especially on robots with a large DoF like Humanoids++ and UNIMALs.
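The abstract's core idea — grouping actuators into synergies with unsupervised learning, then emitting one action per synergy instead of one per actuator — can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the actuator features, the choice of k-means as the unsupervised grouping method, and the binary synergy matrix are all illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: stand-in for the paper's unsupervised grouping step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each actuator to its nearest synergy center.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical per-actuator features (e.g. joint positions in the morphology).
actuator_features = np.array(
    [[0.0, 0.0], [0.1, 0.0],   # two hip-like actuators
     [1.0, 0.0], [1.1, 0.1],   # two knee-like actuators
     [2.0, 1.0], [2.1, 1.0]]   # two ankle-like actuators
)
n_actuators, n_synergies = len(actuator_features), 3

# Unsupervised grouping of actuators into synergies.
labels = kmeans(actuator_features, n_synergies)

# Binary synergy matrix S: actuator i is driven by synergy labels[i].
S = np.zeros((n_actuators, n_synergies))
S[np.arange(n_actuators), labels] = 1.0

# The policy now outputs only n_synergies actions; actuators in the same
# synergy receive the same command -> rank-3 control of a 6-DoF robot.
synergy_action = np.array([0.5, -0.3, 0.8])
actuator_action = S @ synergy_action  # shape (n_actuators,)
```

The point of the sketch is the dimensionality reduction: the policy's action space shrinks from `n_actuators` to `n_synergies`, which is what makes learning tractable as DoF grows.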