Bayesian Priors and Likelihoods are Encoded Independently in Human Multisensory Perception

Ladan Shams, Ulrik Beierholm
Talk
Last modified: 2008-05-13

Abstract


We studied human auditory-visual perception using a spatial localization task. In this task, as in many others, zero or small discrepancies between the two modalities lead to fusion, large discrepancies result in segregation (no interaction between the modalities), and moderate discrepancies result in moderate interaction. We show quantitatively that this combination of crossmodal information is highly consistent with a normative Bayesian model performing causal inference. Because we have a method of estimating priors and likelihoods, we could also ask whether the priors and likelihoods are encoded independently of each other in this task. Intuitively, priors represent a priori information about the environment, i.e., information available prior to encountering the given stimuli, and are thus not dependent on the current stimuli. While this interpretation is considered a defining characteristic of Bayesian computation by many, Bayes' rule per se does not require that priors remain constant despite significant changes in the stimulus; therefore, demonstrating that a task is performed Bayes-optimally does not imply that the priors are invariant to varying likelihoods. We empirically investigated the interdependence of priors and likelihoods by strongly manipulating the presumed likelihoods (testing subjects with two different stimulus parameters one week apart) and examining whether the estimated priors change or remain the same. The results suggest that the estimated prior probabilities are indeed independent of the immediate input (likelihood), which further supports the hypothesis that human auditory-visual perception is a Bayesian inference process combining a priori information with the sensory estimates.
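For illustration, the following is a minimal Python sketch of the kind of causal-inference computation the abstract describes: Gaussian likelihoods for the visual and auditory measurements, a Gaussian spatial prior, and a prior probability that the two signals share a common cause. The parameter values (sigma_v, sigma_a, sigma_p, p_common) are hypothetical placeholders, not the estimates fitted in this study.

```python
# Sketch of Bayesian causal inference for auditory-visual localization.
# Assumed (not from the abstract): Gaussian noise, zero-mean spatial prior,
# model averaging of the fused and segregated estimates.

import math

def gauss(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def causal_inference_estimate(x_v, x_a,
                              sigma_v=2.0,    # visual noise SD (deg), assumed
                              sigma_a=8.0,    # auditory noise SD (deg), assumed
                              sigma_p=15.0,   # spatial prior SD (deg), assumed
                              p_common=0.5):  # prior prob. of a common cause, assumed
    var_v, var_a, var_p = sigma_v ** 2, sigma_a ** 2, sigma_p ** 2

    # Likelihood of both measurements under a single common cause
    # (source location integrated out against the zero-mean spatial prior).
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    like_common = math.exp(-0.5 * ((x_v - x_a) ** 2 * var_p
                                   + x_v ** 2 * var_a
                                   + x_a ** 2 * var_v) / denom) \
                  / (2.0 * math.pi * math.sqrt(denom))

    # Likelihood under two independent causes.
    like_separate = gauss(x_v, 0.0, var_v + var_p) * gauss(x_a, 0.0, var_a + var_p)

    # Posterior probability of a common cause (Bayes' rule).
    post_common = like_common * p_common / (
        like_common * p_common + like_separate * (1.0 - p_common))

    # Optimal location estimates under each causal structure
    # (precision-weighted averages that include the spatial prior).
    s_fused = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    s_v_alone = (x_v / var_v) / (1 / var_v + 1 / var_p)
    s_a_alone = (x_a / var_a) / (1 / var_a + 1 / var_p)

    # Model averaging: weight fused vs. segregated estimates by the posterior.
    s_hat_v = post_common * s_fused + (1 - post_common) * s_v_alone
    s_hat_a = post_common * s_fused + (1 - post_common) * s_a_alone
    return post_common, s_hat_v, s_hat_a

# Small discrepancy -> near-complete fusion; large discrepancy -> segregation.
print(causal_inference_estimate(x_v=5.0, x_a=7.0))
print(causal_inference_estimate(x_v=5.0, x_a=-20.0))
```

In this sketch, the prior parameters (sigma_p and p_common) are fixed across calls while the likelihoods vary with the stimuli; the study's question is whether human observers likewise keep these prior quantities unchanged when the stimulus parameters are strongly manipulated.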
