Combining sight, sound and touch in mature and developing humans

David Burr

Last modified: 2008-05-29

Abstract


Recently, the so-called “Bayesian” (maximum likelihood estimation, MLE) approach has provided many useful insights into multi-sensory integration. In this talk I will discuss the application of this technique to explaining the “ventriloquist effect”. Vision normally “captures” the position of sounds, but when visual stimuli are heavily blurred, sound can capture vision. All results are well predicted by assuming that the brain calculates a weighted sum of the auditory and visual signals, with weights proportional to the reliability of each signal. How quickly can the brain calculate appropriate weights for the integration? We took advantage of the fact that vision is impoverished for a brief but well-defined period around the time of saccades, while auditory localization is unaffected. Using auditory probes during saccades, we have shown that both the perceived position and the precision of localizing a visuo-auditory source during saccades are well predicted by maximum likelihood estimation, and that the dynamics of the combination follow a characteristic and predictable time course. This result suggests that the brain can rapidly update perceptual weights to take into account dynamic changes in reliability. We also studied the development of integration in school-age children using two tasks: a size judgment and an orientation discrimination. In neither task did children below eight years of age integrate visual and haptic information: in the size task, haptic information dominated (although the precision of this source was worse than that of vision), and in the orientation task vision (the less precise sense) dominated. We suggest that prior to eight years of age, the different perceptual systems calibrate each other, at the expense of optimal integration.
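In the standard MLE formulation assumed here, the combined audio-visual estimate is a reliability-weighted average of the single-cue estimates $\hat{S}_V$ and $\hat{S}_A$, with each weight proportional to the inverse variance (reliability) of its cue:

$$\hat{S}_{VA} = w_V \hat{S}_V + w_A \hat{S}_A, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_V^2 + 1/\sigma_A^2},$$

yielding a predicted combined variance

$$\sigma_{VA}^2 = \frac{\sigma_V^2\,\sigma_A^2}{\sigma_V^2 + \sigma_A^2},$$

which is never larger than the variance of either cue alone. Under this scheme, blurring the visual stimulus raises $\sigma_V^2$ and hence lowers $w_V$, which is why sound can then capture vision.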
