Paper Title
ShapeFindAR: Exploring In-Situ Spatial Search for Physical Artifact Retrieval using Mixed Reality
Paper Authors
Paper Abstract
Personal fabrication is made more accessible through repositories like Thingiverse, as they replace modeling with retrieval. However, they require users to translate spatial requirements to keywords, which paints an incomplete picture of physical artifacts: proportions or morphology are non-trivially encoded through text only. We explore a vision of in-situ spatial search for (future) physical artifacts, and present ShapeFindAR, a mixed-reality tool to search for 3D models using in-situ sketches blended with textual queries. With ShapeFindAR, users search for geometry, and not necessarily precise labels, while coupling the search process to the physical environment (e.g., by sketching in-situ, extracting search terms from objects present, or tracing them). We developed ShapeFindAR for HoloLens 2, connected to a database of 3D-printable artifacts. We specify in-situ spatial search, describe its advantages, and present walkthroughs using ShapeFindAR, which highlight novel ways for users to articulate their wishes, without requiring complex modeling tools or profound domain knowledge.