Paper Title

iGibson 1.0: a Simulation Environment for Interactive Tasks in Large Realistic Scenes

Paper Authors

Bokui Shen, Fei Xia, Chengshu Li, Roberto Martín-Martín, Linxi Fan, Guanzhi Wang, Claudia Pérez-D'Arpino, Shyamal Buch, Sanjana Srivastava, Lyne P. Tchapmi, Micael E. Tchapmi, Kent Vainio, Josiah Wong, Li Fei-Fei, Silvio Savarese

Paper Abstract

We present iGibson 1.0, a novel simulation environment to develop robotic solutions for interactive tasks in large-scale realistic scenes. Our environment contains 15 fully interactive home-sized scenes with 108 rooms populated with rigid and articulated objects. The scenes are replicas of real-world homes, with the distribution and layout of objects aligned to those of the real world. iGibson 1.0 integrates several key features to facilitate the study of interactive tasks: i) generation of high-quality virtual sensor signals (RGB, depth, segmentation, LiDAR, flow, etc.), ii) domain randomization to change the materials of the objects (both visual and physical) and/or their shapes, iii) integrated sampling-based motion planners to generate collision-free trajectories for robot bases and arms, and iv) an intuitive human-iGibson interface that enables efficient collection of human demonstrations. Through experiments, we show that the full interactivity of the scenes enables agents to learn useful visual representations that accelerate the training of downstream manipulation tasks. We also show that iGibson 1.0's features enable the generalization of navigation agents, and that the human-iGibson interface and integrated motion planners facilitate efficient imitation learning of human-demonstrated (mobile) manipulation behaviors. iGibson 1.0 is open-source and equipped with comprehensive examples and documentation. For more information, visit our project website: http://svl.stanford.edu/igibson/
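The features described above are exposed through a Gym-style Python API. Below is a minimal sketch of a simulation loop, assuming the pip-installable `igibson` package; the config file name `turtlebot_demo.yaml` and the exact module path are assumptions that may vary across releases, so consult the project documentation for authoritative usage.

```python
# Minimal sketch of driving an iGibson environment through its Gym-style API.
# Assumption: `turtlebot_demo.yaml` is a config shipped with the package that
# selects the scene, robot, and sensor modalities (e.g. rgb, depth).
from igibson.envs.igibson_env import iGibsonEnv

env = iGibsonEnv(
    config_file="turtlebot_demo.yaml",  # scene, robot, and output modalities
    mode="headless",                    # render off-screen; "gui" opens a viewer
)

obs = env.reset()  # dict of virtual sensor signals keyed by modality
for _ in range(100):
    action = env.action_space.sample()          # random placeholder policy
    obs, reward, done, info = env.step(action)  # standard Gym step tuple
    if done:
        obs = env.reset()
env.close()
```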
