Audiovisual depth perception in real and virtual environments

Jason S Chan, Carol O'Sullivan, Fiona N Newell
Poster
Last modified: 2008-05-13

Abstract


Depth perception has proven to be a significant hurdle for virtual reality. Previous studies have demonstrated a consistent underestimation of both visually and auditorily perceived distance. However, those studies explored depth perception through only one modality at a time. In this study, we took a multisensory approach. We used a 5 × 3 × 2 mixed design, with Location (25 m, 22.5 m, 20 m, 17.5 m, 15 m) and Modality (vision only vs. auditory only vs. audiovisual) as within-subjects factors and Environment (real vs. virtual) as the between-subjects factor. Participants saw, heard, or both saw and heard a target at one of these locations for 10 seconds. Their task was then to bisect the distance between the start position and the target location while wearing a blindfold. In the real environment, participants saw a real target and heard real loudspeakers. In the virtual environment, the hallway and stimuli were presented via a head-mounted display. The auditory stimuli were created by recording the sounds through probe microphones placed in the participants' ears, capturing each participant's own ear convolutions. Results show a clear underestimation of perceived distance in both unimodal conditions, with larger underestimations in the auditory conditions. Performance in the multisensory condition fell between the performances in the two unimodal conditions.