Human trimodal perception follows optimal statistical inference

Ladan Shams, Ulrik R. Beierholm, David R. Wozny
Poster
Last modified: 2008-05-09

Abstract


Our nervous system typically processes signals from multiple sensory modalities at any given moment, and is therefore faced with two important problems: which of the signals are caused by a common event, and how to combine those signals. We investigated human perception in the presence of auditory, visual, and tactile stimulation in a numerosity judgment task. Observers were presented with stimuli in one, two, or three modalities simultaneously, and were asked to report their percepts in each modality. The degree of congruency between the modalities varied across trials. Crossmodal illusions were observed in most conditions in which there was incongruence among the two or three stimuli, revealing robust interactions among the three modalities in all directions. We compared the human observers’ responses with those of a simple normative Bayesian inference model that, in contrast to traditional models of cue combination, does not make an a priori assumption of fusion, and allows independent causes for the different signals. The observers’ bimodal and trimodal percepts were remarkably consistent with the model. The model contains 3 free parameters and accounts for 95% of the variance in the data (208 data points). These findings provide evidence that the combination of sensory information among three modalities follows optimal statistical inference in a framework that allows fusion as well as segregation of stimuli.
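The model class described above can be illustrated with a minimal sketch (not the authors' code or fitted parameters): Bayesian causal inference over two cues with Gaussian likelihoods, an illustrative prior probability of a common cause, and model averaging between fusion and segregation. The trimodal model in the abstract generalizes this idea to all causal structures over three signals, and the actual task used discrete numerosity rather than the continuous variables assumed here.

```python
# Minimal sketch of Bayesian causal inference for two noisy cues.
# All parameter values are illustrative, not fitted to the reported data.
import numpy as np

def causal_inference_estimate(x_a, x_v, sigma_a=1.0, sigma_v=2.0,
                              sigma_p=10.0, mu_p=0.0, p_common=0.5):
    """Model-averaged estimate of the auditory source given noisy
    auditory (x_a) and visual (x_v) measurements.

    sigma_a, sigma_v: sensory noise; mu_p, sigma_p: Gaussian prior over
    sources; p_common: prior probability that both signals share one cause.
    """
    # Likelihood of both measurements under a common cause (C = 1),
    # with the shared source integrated out analytically.
    var_sum = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
               + sigma_v**2 * sigma_p**2)
    like_c1 = np.exp(-((x_a - x_v)**2 * sigma_p**2
                       + (x_a - mu_p)**2 * sigma_v**2
                       + (x_v - mu_p)**2 * sigma_a**2) / (2 * var_sum)) \
              / (2 * np.pi * np.sqrt(var_sum))

    # Likelihood under independent causes (C = 2): each signal has its own source.
    like_a = np.exp(-(x_a - mu_p)**2 / (2 * (sigma_a**2 + sigma_p**2))) \
             / np.sqrt(2 * np.pi * (sigma_a**2 + sigma_p**2))
    like_v = np.exp(-(x_v - mu_p)**2 / (2 * (sigma_v**2 + sigma_p**2))) \
             / np.sqrt(2 * np.pi * (sigma_v**2 + sigma_p**2))
    like_c2 = like_a * like_v

    # Posterior probability that the two signals share a common cause.
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Optimal estimate under each causal structure: reliability-weighted
    # fusion vs. segregation, then averaged by the posterior over structures.
    fused = (x_a / sigma_a**2 + x_v / sigma_v**2 + mu_p / sigma_p**2) \
            / (1 / sigma_a**2 + 1 / sigma_v**2 + 1 / sigma_p**2)
    segregated = (x_a / sigma_a**2 + mu_p / sigma_p**2) \
                 / (1 / sigma_a**2 + 1 / sigma_p**2)
    return post_c1 * fused + (1 - post_c1) * segregated

# Example: incongruent cues yield a partial (not full) crossmodal bias.
print(causal_inference_estimate(x_a=2.0, x_v=3.0))
```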
