A neurocomputational model of cortical auditory-visual illusions
Cristiano Cuppini, Elisa Magosso, Mauro Ursino

Date: 2012-06-20 02:30 PM – 04:00 PM
Last modified: 2012-04-25

Abstract


The ability of the brain to integrate information from different sensory channels is fundamental to perceiving the external world. Experimental findings suggest that multisensory interactions occur already at early processing stages in primary cortices, challenging the classical idea of independent sensory processing streams. However, the underlying mechanisms are still poorly understood.
The aim of the present work was to develop a neural network model to analyse possible mechanisms and neural circuitries underlying perceptual audio-visual illusions, such as the Shams illusion and the ventriloquism effect. The model consists of two topologically aligned arrays of auditory and visual neurons. Neurons within each layer interact via excitatory and inhibitory lateral synapses, following a classical Mexican-hat arrangement; moreover, neurons in the two layers are reciprocally connected via one-to-one excitatory inter-area synapses. A fundamental point in the model is that the visual neurons have a smaller spatial receptive field than the auditory ones (i.e., better spatial resolution) but a slower time constant (i.e., lower temporal precision). This is the only difference between the two areas.
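The architecture described above can be sketched as a simple firing-rate simulation: two topologically aligned layers with Mexican-hat lateral connectivity, one-to-one excitatory cross-modal synapses, a broader receptive field and faster time constant on the auditory side, and a narrower receptive field with a slower time constant on the visual side. The snippet below is an illustrative sketch only; all parameter values, the sigmoidal activation, and the centroid read-out are assumptions made for demonstration, not the published model's equations or values.

```python
import numpy as np

N = 100  # neurons per layer, topologically aligned across modalities

def mexican_hat(n, a_ex, s_ex, a_in, s_in):
    """Lateral weights: narrow excitation minus broader inhibition."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    w = (a_ex * np.exp(-d**2 / (2 * s_ex**2))
         - a_in * np.exp(-d**2 / (2 * s_in**2)))
    np.fill_diagonal(w, 0.0)  # no self-connection
    return w

def gaussian_input(n, center, sigma, amp):
    """External stimulus filtered by a Gaussian spatial receptive field."""
    x = np.arange(n)
    return amp * np.exp(-(x - center)**2 / (2 * sigma**2))

def sigmoid(u, theta=3.0):
    """Static sigmoidal activation (hypothetical threshold theta)."""
    return 1.0 / (1.0 + np.exp(-(u - theta)))

def centroid(z):
    """Read out perceived position as the centroid of layer activity."""
    return float(np.sum(np.arange(len(z)) * z) / np.sum(z))

# Illustrative (hypothetical) parameters:
W_lat = mexican_hat(N, a_ex=0.2, s_ex=2.0, a_in=0.1, s_in=6.0)  # same in both layers
w_cross = 0.8                 # one-to-one inter-area excitatory synapse
tau_a, tau_v = 3.0, 15.0      # audition faster than vision (ms)
rf_a, rf_v = 10.0, 2.0        # auditory RF broader than visual (worse spatial acuity)

def simulate(pos_a, pos_v, T=200.0, dt=0.1):
    """Euler integration of the two reciprocally coupled firing-rate layers."""
    I_a = gaussian_input(N, pos_a, rf_a, 6.0)
    I_v = gaussian_input(N, pos_v, rf_v, 6.0)
    z_a = np.zeros(N)
    z_v = np.zeros(N)
    for _ in range(int(T / dt)):
        u_a = I_a + W_lat @ z_a + w_cross * z_v  # net input to auditory layer
        u_v = I_v + W_lat @ z_v + w_cross * z_a  # net input to visual layer
        z_a += dt / tau_a * (-z_a + sigmoid(u_a))
        z_v += dt / tau_v * (-z_v + sigmoid(u_v))
    return z_a, z_v

# Ventriloquism-like condition: auditory stimulus at 40, visual at 50.
z_a_congruent, _ = simulate(pos_a=40, pos_v=40)
z_a_shifted, _ = simulate(pos_a=40, pos_v=50)
# Relative to the congruent case, the auditory activity centroid moves
# toward the spatially discrepant visual stimulus.
print(centroid(z_a_congruent), centroid(z_a_shifted))
```

With this structure, spatial capture of audition by vision emerges from the focused visual drive reaching the auditory layer through the cross-modal synapses, while the broad auditory receptive field leaves the auditory estimate easy to displace; no parameter differs between the two conditions.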
Simulations suggest that classic illusions (the Shams illusion and ventriloquism) can be explained by assuming direct excitatory synapses between the two regions, without the need for feedback projections from higher-order integrative regions. Moreover, the model ascribes the Shams illusion to the better temporal resolution of auditory processing compared with visual processing. Similarly, the better spatial resolution of visual processing can explain the ventriloquism effect, with the same model structure and the same parameter values.
