Crossmodal semantic constraints on visual perception of binocular rivalry
Yi-Chuan Chen, Su-Ling Yeh, Charles Spence

Last modified: 2011-08-22

Abstract


Natural environments typically convey contextual information via several different sensory modalities. Here, we report a study designed to investigate the crossmodal semantic modulation of visual perception using the binocular rivalry paradigm. Participants viewed a dichoptic display consisting of a bird presented to one eye and a car presented to the other, while listening to either birdsong or the sound of a car engine revving. Participants’ dominant percepts were modulated by the presentation of a soundtrack associated with either the bird or the car, as compared to the presentation of a soundtrack irrelevant to both visual figures (tableware clattering together in a restaurant). No such crossmodal semantic effect was observed when the participants merely maintained an abstract semantic cue in memory. We further demonstrated that this crossmodal semantic modulation can be dissociated from the effects of high-level attentional control over, and the low-level luminance contrast of, the dichoptic figures. In sum, we demonstrate a novel effect of crossmodal semantic congruency on binocular rivalry. This effect can be considered a perceptual grouping, or contextual, constraint on human visual awareness, mediated by mid-level crossmodal excitatory connections embedded within the multisensory semantic network.