Audio-visual integration during multisensory object categorization.

Sebastian Werner, Max Planck Institute for Biological Cybernetics / Department of Cognitive and Computational Psychophysics

Abstract
Tools and musical instruments are characterized by their form and sound. We investigated audio-visual integration during semantic categorization by presenting pictures and sounds of objects separately or together and parametrically manipulating their information content. The 3 × 6 factorial design crossed (1) auditory information (sound, noise, silence) with (2) visual information (six levels of image degradation). Visual information was degraded by phase-scrambling the images to varying degrees (0%, 20%, 40%, 60%, 80%, 100%). Subjects categorized the stimuli as musical instruments or tools. For both accuracy and reaction times (RT), we found significant main effects of (1) visual and (2) auditory information, and (3) an interaction between the two factors. The interaction was primarily due to an increased facilitatory effect of sound at the 80% degradation level. Consistently across the first five levels of visual degradation, we observed RT improvements for the sound-visual relative to the noise- or silence-visual conditions. The corresponding RT distributions significantly violated the so-called race model inequality across the first five percentiles of their cumulative distribution functions (even when controlling for low-level audio-visual interactions). These results suggest that redundant structural and semantic information is not independently processed but integrated during semantic categorization.
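The race model inequality referred to above is the bound commonly attributed to Miller (1982): if the two modalities are processed independently and the faster one triggers the response, the cumulative RT distribution for redundant (audio-visual) targets cannot exceed the sum of the unimodal distributions, F_AV(t) <= F_A(t) + F_V(t). Faster-than-bound bimodal responses indicate coactivation rather than a parallel race. The abstract does not specify the exact test procedure; the following is only a minimal sketch of a percentile-based check, with simulated RT data, illustrative percentile points, and hypothetical variable names:

    import numpy as np

    def race_model_check(rt_av, rt_a, rt_v, percentiles=(5, 15, 25, 35, 45)):
        """Check Miller's (1982) race model inequality at given RT percentiles.

        rt_av, rt_a, rt_v: 1-D arrays of reaction times (ms) for the bimodal
        and the two unimodal conditions. Returns, for each percentile, the
        bimodal CDF value minus the race model bound; positive values
        indicate a violation of the inequality.
        """
        violations = []
        for p in percentiles:
            # Time point t taken from the bimodal RT distribution at percentile p
            t = np.percentile(rt_av, p)
            f_av = np.mean(rt_av <= t)                        # empirical CDF of bimodal RTs at t
            bound = np.mean(rt_a <= t) + np.mean(rt_v <= t)   # race model upper bound
            violations.append(f_av - min(bound, 1.0))
        return violations

    # Example with simulated RTs (ms); real data would come from the experiment.
    rng = np.random.default_rng(0)
    rt_av = rng.normal(480, 60, 200)   # audio-visual (redundant) condition
    rt_a = rng.normal(560, 70, 200)    # auditory-only condition
    rt_v = rng.normal(550, 70, 200)    # visual-only condition
    print(race_model_check(rt_av, rt_a, rt_v))

In this sketch, positive differences at the lower percentiles would correspond to the violations reported in the abstract, i.e. the fastest bimodal responses being faster than any race between independent unimodal processes could produce.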

