When are three cues better than two? Statistical robustness in combining information from vision, touch, and sound.

Carmel A. Levitan, Joint Graduate Group in Bioengineering at UCSF/UC Berkeley

Abstract
When multiple sensory signals are available concerning a given environmental property, the brain generally combines them in a nearly optimal way. When the signals have similar values, human behavior is well predicted by a maximum-likelihood model in which the signals are weighted in inverse proportion to their variances. When the signals are quite different from one another, it may be more sensible not to combine them, because the conflict might be due to a faulty sensor or to the signals arising from different sources. We tested whether the brain manifests statistical robustness (a reduction in the weight given to an outlier) in selecting which signals to combine. We examined spatial localization of visual, auditory, and haptic targets while varying the amount of conflict between the signals. When the conflicts were small, performance was consistent with the weighted-average model; this is an appropriate strategy because small conflicts are usually caused by measurement noise. When the conflicts were large and one signal specified a value quite different from the others, we observed robustness: the weight given to the outlier was significantly reduced. We conclude that the brain exhibits statistical robustness in combining signals from three sensory modalities.
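
For concreteness, the weighted-average model referred to above can be written out. This is a minimal formulation under the standard assumptions of unbiased single-cue estimates corrupted by independent Gaussian noise; the subscripts V, A, and H (vision, audition, haptics) are illustrative labels, not notation from the abstract itself:

    \hat{S}_{VAH} = w_V \hat{S}_V + w_A \hat{S}_A + w_H \hat{S}_H,
    \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_V^2 + 1/\sigma_A^2 + 1/\sigma_H^2},

where \hat{S}_i is the estimate from cue i and \sigma_i^2 is its variance. The weights sum to one, each cue is weighted in inverse proportion to its variance, and the combined estimate attains the minimum possible variance, \sigma_{VAH}^2 = (1/\sigma_V^2 + 1/\sigma_A^2 + 1/\sigma_H^2)^{-1}. Robustness, as tested here, corresponds to the empirical weight on a discrepant cue falling below this inverse-variance prediction.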
