Aurally aided visual search in depth using ‘real’ and ‘virtual’ crowds

Jason S Chan, Simon Dobbyn, Paul McDonald, Henry J Rice, Carol O'Sullivan, Fiona N Newell
Poster
Time: 2009-06-30  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract


It is well known that a sound presented at the same location on the horizontal plane as a visual target can improve detection of that target by guiding attention to its location (Perrott, Cisneros, McKinley, & D'Angelo, 1996; Spence & Driver, 1996). We asked whether sound can also affect search for a visual target presented at different depths. In separate experiments, we explored aurally aided visual search in 3-dimensional space using ‘real’ and ‘virtual’ environments. In Experiment 1, the visual scene consisted of eight face images arranged in two horizontal rows of four: one row at a near location (i.e. within peripersonal space) and one at a far location. Each face image was paired with a loudspeaker. The participant’s task was to indicate whether the target face, indicated by a flash of an LED, was ‘near’ or ‘far’. Sounds were presented simultaneously with the LED flash and were either congruent or incongruent with the location of the target. In Experiment 2, we presented virtual scenes of people, and the participant’s task was to locate a target individual in the scene. Congruent or incongruent virtual voice information, containing distance and direction cues, was paired with the target. In both experiments, we found that response times were facilitated by a congruent sound. Our findings suggest that sound can have a significant influence on locating visual targets presented in depth in both real and virtual displays, with implications for understanding crossmodal influences on spatial attention and for the design of realistic virtual environments.