The critical pre/post-event temporal range for stable crossmodal perception: Evidence from the stream/bounce display

Yousuke Kawachi, Michiaki Shibata, Hideaki Kawabata, Miho Kitamura, Jiro Gyoba
Poster
Time: 2009-06-30  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract


We investigated the temporal range over which the perceptual system develops crossmodal interaction. In particular, we focused on the synergy between the temporal ranges before and after a crossmodal (audiovisual) event that leads to stable crossmodal interaction. We used the stream/bounce display, in which two objects moving along crossing trajectories are perceived either as streaming through or as bouncing off each other. Although the streaming percept is usually dominant, a sound burst presented at the moment the objects coincide has been reported to induce a robust bouncing percept. We manipulated whether or not a sound burst was presented at the coincidence point. Additionally, the presentation durations of the moving objects before and after the coincidence point were varied (pre/post-coincidence durations: 50, 100, 200, or 300 ms). Observers judged whether the two objects appeared to stream through or bounce off each other. The results revealed that when the pre-coincidence duration exceeded 200 ms, the sound-induced bouncing percept was fully obtained with a post-coincidence duration of approximately 100 ms. In contrast, when the pre-coincidence duration was 100 ms or less, a post-coincidence duration of about 200 ms was required to fully obtain the bouncing percept. We therefore suggest that crossmodal perception develops through a synergy between pre- and post-event processing.