Unmasking the dichoptic mask by sound

Su-Ling Yeh, Yung-Hao Yang
Poster
Time: 2009-06-30  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract


There is now burgeoning evidence that multisensory integration occurs at various levels of processing, but most studies have examined multisensory interactions using well-perceived stimuli. Here we examine whether multisensory interaction occurs between a suprathreshold sound and an invisible, dichoptically presented visual stimulus, and whether sound helps release the visual target from suppression. We used the continuous flash suppression paradigm, in which a word presented to one eye was initially rendered completely invisible by the strong binocular-rivalry suppression induced by dynamic-noise patterns shown to the other eye. The latency for the word to be released from suppression as its contrast increased was measured and compared across conditions. We found that a suprathreshold sound made the invisible word become visible more quickly in the dichoptic condition, but not when the word and masks were superimposed and viewed binocularly. Upright words were detected faster than inverted ones with dichoptic but not binocular viewing, and sound facilitated detection of the word only when the sound and the word were spatially congruent. We suggest that sound helps to segregate the two fused visual events (i.e., the word and the masks), making the target more easily accessible to awareness.
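To make the procedure concrete, the trial logic of such a breaking-continuous-flash-suppression measurement can be sketched as below. This is a minimal, illustrative Python sketch only: the linear contrast ramp, the parameter values, and the names target_contrast, run_trial, and observer_responded are assumptions for exposition, not details of the actual experiment.

    import time
    from typing import Callable, Optional

    # Illustrative values; the real ramp duration and peak contrast are not
    # specified in the abstract.
    RAMP_DURATION_S = 10.0   # assumed length of the contrast ramp / trial
    MAX_CONTRAST = 1.0       # assumed peak contrast of the target word

    def target_contrast(elapsed_s: float) -> float:
        """Linear ramp of the suppressed word's contrast from 0 to MAX_CONTRAST."""
        return min(MAX_CONTRAST, MAX_CONTRAST * elapsed_s / RAMP_DURATION_S)

    def run_trial(observer_responded: Callable[[], bool]) -> Optional[float]:
        """Run one trial loop; return the release-from-suppression latency in
        seconds, or None if the word never broke through before the ramp ended."""
        onset = time.monotonic()
        while True:
            elapsed = time.monotonic() - onset
            if elapsed > RAMP_DURATION_S:
                return None                  # no breakthrough within the trial
            contrast = target_contrast(elapsed)
            # Here the word would be drawn at `contrast` to one eye and the
            # dynamic-noise mask to the other eye (dichoptic condition); in the
            # binocular control, word and mask are superimposed for both eyes.
            if observer_responded():
                return elapsed               # breakthrough (detection) latency
            time.sleep(1 / 60)               # advance roughly one video frame

The comparison of interest in the study is then the distribution of these breakthrough latencies across conditions (sound present vs. absent, dichoptic vs. binocular viewing, upright vs. inverted words, spatially congruent vs. incongruent sound).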
