Visual and auditory spatial signals in naturalistic environments: a computationally-based analysis of functional imaging data

Cecile Bordier, Akitoshi Ogawa, Emiliano Macaluso

Last modified: 2013-05-05

Abstract


A major challenge for fMRI studies is the use of realistic stimuli, which is particularly relevant for understanding multisensory interactions in the real world. Computational models [Itti et al. 1998; Kayser et al. 2005] have been used to track brain activity during the viewing of complex audio-visual stimuli [Bartels et al. 2007; Bordier et al. 2013]. We extended this approach to investigate activity associated with higher-order spatial signals in both vision and audition. We utilized a 3D-surround movie that included visual disparity cues and multiple sound sources (centre, front left/right, back left/right). For each visual frame, we computed a disparity map [Liu et al. 2011] and indexed absolute disparity (the sum over the entire map) and gradient disparity (local contrast; Bordier et al. 2013). For audition, we indexed sound spatiality by computing the correlation between each of the five external channels and a sixth channel delivered over headphones during fMRI, which contained primarily the "centre" sound. We also indexed sound-intensity contrast [Bordier et al. 2013] to control for mere intensity changes over time. These indexes were used to fit the BOLD signal in 16 subjects who watched the 3D-surround movie during fMRI. The results showed a dissociation between absolute disparity (PPC and V3A) and gradient disparity (V6, STS and IFG). The auditory-spatiality index correlated with activity in the auditory cortex, even after accounting for changes in auditory intensity. We conclude that combining computational models with fMRI enables the study of brain activity associated with complex spatial signals in naturalistic multisensory environments.
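
To illustrate the two disparity indices described above, the sketch below (not the authors' code; the function name and the gradient-based stand-in for the local-contrast measure of Bordier et al. 2013 are assumptions) computes both indices from a dense per-frame disparity map, such as one estimated with SIFT Flow (Liu et al. 2011):

```python
import numpy as np

def disparity_indices(disparity_map):
    """Return (absolute, gradient) disparity indices for one frame.

    disparity_map : 2D array, one disparity value per pixel.
    """
    # Absolute disparity: sum of disparity magnitudes over the entire map.
    absolute = float(np.abs(disparity_map).sum())
    # Gradient disparity: local contrast, approximated here by the mean
    # magnitude of the spatial gradient of the disparity map.
    gy, gx = np.gradient(disparity_map.astype(float))
    gradient = float(np.hypot(gx, gy).mean())
    return absolute, gradient
```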
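The sound-spatiality index can be sketched along the same lines; the channel layout, the windowing, and the use of the Pearson correlation over a whole window are illustrative assumptions rather than the authors' exact procedure:

```python
import numpy as np

def spatiality_index(external, headphone):
    """Correlate each of the 5 surround channels with the headphone channel.

    external  : (5, n_samples) array of the external surround channels
                (centre, front left/right, back left/right)
    headphone : (n_samples,) array of the channel delivered during fMRI
    Returns a length-5 array of Pearson correlations; lower values indicate
    more sound energy away from the "centre" source.
    """
    hp = (headphone - headphone.mean()) / headphone.std()
    ext = (external - external.mean(axis=1, keepdims=True)) \
          / external.std(axis=1, keepdims=True)
    return ext @ hp / headphone.size
```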
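Finally, a minimal sketch of how such an index could be used to fit the BOLD signal, assuming it has been resampled to one value per scan and convolved with a canonical double-gamma HRF (the abstract does not specify the exact model used):

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled at the scan repetition time (TR, seconds)."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def index_to_regressor(index, tr):
    """Convolve a per-scan stimulus index with the HRF and z-score it."""
    reg = np.convolve(index, canonical_hrf(tr))[: index.size]
    return (reg - reg.mean()) / reg.std()
```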

Keywords


Computational models; visual disparity; multiple sound sources

References


Bartels A., Zeki S. and Logothetis N.K. (2007) "Natural vision reveals regional specialization to local motion and to contrast-invariant, global flow in the human brain." Cerebral Cortex, 18:105-117.

Bordier C., Puja F. and Macaluso E. (2013) "Sensory processing during viewing of cinematographic material: computational modeling and functional neuroimaging." NeuroImage, 67:213-226.

Itti L., Koch C. and Niebur E. (1998) "A model of saliency-based visual attention for rapid scene analysis." IEEE Transactions on Pattern Analysis and Machine Intelligence, 20:1254-1259.

Kayser C., Petkov C.I., Lippert M. and Logothetis N.K. (2005) "Mechanisms for allocating auditory attention: an auditory saliency map." Current Biology, 15(21):1943-1947.

Liu C., Yuen J. and Torralba A. (2011) "SIFT Flow: dense correspondence across different scenes and its applications." IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(5):978-994.
