Title
VRBubble: Enhancing Peripheral Awareness of Avatars for People with Visual Impairments in Social Virtual Reality
Authors
Abstract
Social Virtual Reality (VR) is growing as a medium for remote socialization and collaboration. However, current social VR applications are not accessible to people with visual impairments (PVI) due to their focus on visual experiences. We aim to facilitate social VR accessibility by enhancing PVI's peripheral awareness of surrounding avatar dynamics. We designed VRBubble, an audio-based VR technique that provides surrounding avatar information based on social distances. Following Hall's proxemic theory, VRBubble divides the social space into three Bubbles -- Intimate, Conversation, and Social -- and generates spatial audio feedback to distinguish avatars in different bubbles and provide suitable avatar information. We provide three audio alternatives: earcons, verbal notifications, and real-world sound effects. PVI can select and combine their preferred feedback alternatives for different avatars, bubbles, and social contexts. We evaluated VRBubble against an audio beacon baseline with 12 PVI in a navigation context and a conversation context. We found that VRBubble significantly enhanced participants' avatar awareness during navigation and enabled avatar identification in both contexts. However, VRBubble proved more distracting in crowded environments.