Haptic-audio simulator for visually impaired indoor exploration

RIS ID

101382

Publication Details

Todd, C., Mallya, S., Majeed, S., Rojas, J. & Naylor, K. 2015, 'Haptic-audio simulator for visually impaired indoor exploration', Journal of Assistive Technologies, vol. 9, no. 2, pp. 71-85.

Abstract

Purpose - VirtuNav is a haptic- and audio-enabled virtual reality simulator that enables persons with visual impairment to explore a 3D computer model of a real-life indoor location, such as a room or building. The aim is to support pre-planning and spatial awareness, so that a user becomes familiar with an environment before experiencing it in reality.

Design/methodology/approach - The system offers two unique interfaces: a free-roam interface where the user navigates the model, and an edit mode where an administrator can manage test users and maps and retrieve test data.

Findings - System testing reveals that spatial awareness and memory mapping improve as users repeat sessions within VirtuNav.

Research limitations/implications - VirtuNav is a research tool for investigating the familiarity users develop through repeated exposure to the simulator, to determine the extent to which haptic and/or sound cues improve a visually impaired user's ability to navigate a room or building with or without occlusion.

Social implications - The application may prove useful for greater real-world engagement: by building confidence through simulated experience, it can enable persons with sight impairment to more comfortably and readily explore and interact with environments formerly unfamiliar or inaccessible to them.

Originality/value - VirtuNav is developed as a practical application offering several unique features, including map design, semi-automatic 3D map reconstruction, and object classification from 2D map data. Visual and haptic rendering of real-time 3D map navigation is provided, as well as automated administrative functions for shortest path determination, actual path comparison, and performance indicator assessment: exploration time taken and collision data.
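The abstract mentions automated shortest path determination and comparison against the user's actual path, but does not specify the algorithm. A minimal sketch of how such a metric could be computed, assuming the indoor map is represented as a 2D occupancy grid and using breadth-first search (the function names and the efficiency ratio are illustrative assumptions, not the paper's implementation):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS shortest path on a 2D occupancy grid (0 = free, 1 = wall).

    Returns the list of (row, col) cells from start to goal inclusive,
    or None if the goal is unreachable. Assumes 4-connected movement.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # visited set doubling as parent map
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to start, then reverse.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

def path_efficiency(actual_path, optimal_path):
    """Ratio of optimal to actual path length (in steps).

    1.0 means the user walked the shortest possible route; lower
    values indicate wandering or backtracking.
    """
    return (len(optimal_path) - 1) / (len(actual_path) - 1)

# Example: a 3x3 room with a wall forcing a detour along the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
optimal = shortest_path(grid, (0, 0), (2, 0))   # 6-step route around the wall
```

A logged user trajectory (as a list of visited cells) can then be scored with `path_efficiency(actual, optimal)` alongside exploration time and collision counts.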


Link to publisher version (DOI)

http://dx.doi.org/10.1108/JAT-06-2014-0016