Audio-visual object integration in human STS: Determinants of stimulus efficacy and inverse effectiveness.

Sebastian Werner, Uta Noppeney
Poster
Last modified: 2008-05-15

Abstract


Combining fMRI and psychophysics, we investigated the neural mechanisms underlying the integration of higher-order audio-visual object features. In a target detection task and a semantic categorization task, we presented subjects with pictures and sounds of tools or musical instruments while factorially manipulating the relative informativeness (degradation) of the auditory and visual stimuli. Controlling for integration effects of low-level stimulus features, the experiment revealed integration of higher-order audio-visual object information selectively in anterior and posterior STS regions. Across subjects, audio-visual BOLD interactions within these regions were strongly subadditive for intact stimuli and turned into additive effects for degraded stimuli. Across voxels, the probability of observing subadditivity increased with the strength of the unimodal BOLD responses for both degraded and intact stimuli. Importantly, subjects’ multi-sensory behavioural benefit significantly predicted the mode of integration in STS: subjects with greater benefits exhibited stronger superadditivity. In conclusion, and consistent with the principle of inverse effectiveness, whereby the mode of integration is determined by stimulus efficacy, we demonstrate that the mode of multi-sensory integration in STS depends on stimulus informativeness, on voxel-specific responsiveness to the unimodal stimulus components, and on the subject-specific multi-sensory behavioural benefit in object perception. The relationship between BOLD responses and behavioural indices shows the functional relevance of super- and subadditive modes of multi-sensory integration.
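Note (illustrative only; the precise contrast used by the authors is not stated in the abstract, so the following reflects the standard criterion rather than their specific analysis): super- and subadditive integration effects are commonly quantified with the bimodal-versus-summed-unimodal interaction,

$\Delta = \beta_{AV} - (\beta_{A} + \beta_{V})$,

where $\beta_{AV}$, $\beta_{A}$ and $\beta_{V}$ denote the BOLD response estimates for the audio-visual, auditory-only and visual-only conditions; the response is termed superadditive when $\Delta > 0$, additive when $\Delta = 0$, and subadditive when $\Delta < 0$.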
