Paper Title
Retargetable AR: Context-aware Augmented Reality in Indoor Scenes based on 3D Scene Graph
Paper Authors
Paper Abstract
In this paper, we present Retargetable AR, a novel AR framework that yields an AR experience that is aware of scene contexts set in various real environments, achieving natural interaction between the virtual and real worlds. To this end, we characterize scene contexts with relationships among objects in 3D space, not with coordinate transformations. The context assumed by AR content and the context formed by the real environment where users experience AR are represented as abstract graph representations, i.e., scene graphs. From RGB-D streams, our framework generates a volumetric map in which geometric and semantic information of a scene are integrated. Moreover, using the semantic map, we abstract scene objects as oriented bounding boxes and estimate their orientations. With such a scene representation, our framework constructs, in an online fashion, a 3D scene graph characterizing the context of a real environment for AR. The correspondence between the constructed graph and an AR scene graph denoting the context of AR content provides a semantically registered content arrangement, which facilitates natural interaction between the virtual and real worlds. We performed extensive evaluations of our prototype system through quantitative evaluation of the performance of the oriented bounding box estimation, subjective evaluation of the AR content arrangement based on constructed 3D scene graphs, and an online AR demonstration. The results of these evaluations showed the effectiveness of our framework, demonstrating that it can provide a context-aware AR experience in a variety of real scenes.
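The abstract's central mechanism is finding a correspondence between the AR scene graph (the context the AR content assumes) and the 3D scene graph built from the real environment. The paper provides no code, so the sketch below illustrates only that graph-correspondence idea with a brute-force, label- and relation-preserving match; the class and function names (`Node`, `SceneGraph`, `match`) and the relation vocabulary (`"on"`) are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from itertools import permutations

@dataclass(frozen=True)
class Node:
    """A scene object reduced to its semantic label (hypothetical type)."""
    label: str  # e.g. "table", "floor"

@dataclass
class SceneGraph:
    """Nodes plus directed, labeled spatial relations between them."""
    nodes: list  # list[Node]
    edges: dict  # (src_idx, dst_idx) -> relation string, e.g. "on"

def match(content: SceneGraph, scene: SceneGraph):
    """Brute-force search for a mapping from content-graph nodes to
    scene-graph nodes that preserves both node labels and edge relations.
    Returns {content_idx: scene_idx} on success, else None."""
    for perm in permutations(range(len(scene.nodes)), len(content.nodes)):
        # Every content node must map to a scene node of the same class.
        if any(content.nodes[i].label != scene.nodes[p].label
               for i, p in enumerate(perm)):
            continue
        # Every relation assumed by the content must hold in the scene.
        if all(scene.edges.get((perm[i], perm[j])) == rel
               for (i, j), rel in content.edges.items()):
            return dict(enumerate(perm))
    return None

# A real scene: a table and a chair, both resting "on" the floor.
scene = SceneGraph(nodes=[Node("table"), Node("floor"), Node("chair")],
                   edges={(0, 1): "on", (2, 1): "on"})
# AR content assuming the context "a table on a floor".
content = SceneGraph(nodes=[Node("table"), Node("floor")],
                     edges={(0, 1): "on"})
print(match(content, scene))  # -> {0: 0, 1: 1}
```

The returned mapping would then anchor the virtual content to the matched real objects. A deployed system would of course use the geometric information (oriented bounding boxes, orientations) and an online, scalable matcher rather than this exhaustive search, which is exponential in graph size and shown purely for illustration.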