Evidence for a mechanism encoding audiovisual spatial separation
Emily Orchard-Mills, Johahn Leung, Maria Concetta Morrone, David Burr, Ella Wufong, Simon Carlile, David Alais

Last modified: 2011-08-22

Abstract

Auditory and visual spatial representations are produced by distinct processes, drawing on separate neural inputs and occurring in different regions of the brain. We tested for a bimodal spatial representation using a spatial increment discrimination task. Discrimination thresholds for synchronously presented but spatially separated audiovisual stimuli were measured for base separations ranging from 0° to 45°. In a dark anechoic chamber, the spatial interval was defined by the azimuthal separation between a white-noise burst, delivered by a speaker mounted on a movable robotic arm, and a 5°-wide checkerboard patch projected onto an acoustically transparent screen. When plotted as a function of base interval, spatial increment thresholds exhibited a J-shaped pattern: thresholds initially declined, reaching a minimum at base separations approximately equal to the individual observer's detection threshold, and thereafter rose log-linearly in accordance with Weber's law. This pattern of results, known as the 'dipper function', would be expected if the auditory and visual signals defining the spatial interval converged onto an early sensory filter encoding audiovisual space. Such a mechanism could serve to encode the spatial separation of auditory and visual stimuli.
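For context, Weber's law for this task states that the just-discriminable increment grows in proportion to the base separation (ΔS = kS, a constant Weber fraction), while the dip at small separations is the classic signature of an accelerating nonlinearity in the underlying sensory filter: a small pedestal lifts the stimulus onto the steep part of the response curve and thereby facilitates discrimination. The sketch below shows how both regimes can fall out of a single nonlinear transducer. It is illustrative only: the Legge-Foley style response function, its parameters, and the fixed response criterion are assumptions for demonstration, not the model or values fitted in this study.

    from scipy.optimize import brentq

    # Minimal sketch of how a dipper function arises from a nonlinear
    # transducer. This Legge-Foley style response function is a standard
    # modelling choice, NOT the authors' fitted model; p, q, z and the
    # criterion response change delta_r are illustrative values.
    def transducer(s, p=2.4, q=2.0, z=10.0):
        """Internal response to a spatial separation s (degrees):
        accelerating near zero, compressive at large separations."""
        return s**p / (s**q + z)

    def increment_threshold(base, delta_r=0.05):
        """Smallest increment d (degrees) that raises the internal
        response by the criterion amount delta_r above the base response."""
        f = lambda d: transducer(base + d) - transducer(base) - delta_r
        return brentq(f, 1e-9, 1e3)

    for base in [0.0, 1.0, 2.0, 5.0, 10.0, 20.0, 45.0]:
        print(f"base {base:5.1f} deg -> "
              f"increment threshold {increment_threshold(base):.2f} deg")

With these toy parameters the computed thresholds first drop below the zero-base (detection) threshold and then rise with increasing base separation, qualitatively reproducing the J-shaped dipper described above.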
