Temporal limits of within- and cross-modal cross-attribute bindings

Waka Fujisaki, Shin'ya Nishida
Poster
Last modified: 2008-05-09

Abstract

The temporal limit for judging the synchrony of two repetitive stimulus sequences is substantially lower across attributes processed in separate modules/modalities than within the same attribute. Although this suggests a general sluggishness of cross-attribute comparisons, the reported limit is not constant: it is slightly higher for cross-modal judgments (~4 Hz for audio-visual and tacto-visual judgments; ≥~8 Hz for audio-tactile judgments) than for within-modal cross-attribute judgments (~2 Hz for color-orientation and color-motion judgments). However, the cross-modal judgments used a synchrony task (e.g., discriminating synchrony/asynchrony between visual and auditory pulse sequences), in which the matching features could be uniquely selected by bottom-up segmentation, whereas the within-modal judgments used a binding task (e.g., judging which color was presented in synchrony with a specific orientation in stimuli alternating in both color and orientation), in which the matching features had to be selected by top-down attention. Here we compared the temporal limits of the two tasks for both within- and cross-modal cross-attribute judgments, using three visual attributes (luminance, color, orientation), one auditory attribute (pitch), and one tactile attribute (left/right hand). The results showed that the temporal limit was ~2 Hz for the binding task but ≥~4 Hz for the synchrony task, regardless of the attribute combination, suggesting the existence of a common cognitive bottleneck for cross-attribute binding tasks.
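To make the stimulus logic concrete, the sketch below is a minimal illustration in Python, not the authors' code; the function and parameter names (alternating_sequence, rate_hz, phase_s, fs) and the specific offset value are hypothetical. It generates binary attribute sequences of the kind described above and contrasts the two tasks: a synchrony trial offsets one sequence in time, whereas a binding trial keeps the sequences physically synchronous and asks which values co-occur.

    import numpy as np

    def alternating_sequence(rate_hz, duration_s, phase_s=0.0, fs=1000):
        # Binary attribute sequence (e.g. red/green, low/high pitch, left/right
        # hand) alternating between two values at rate_hz cycles per second.
        t = np.arange(0.0, duration_s, 1.0 / fs)
        return t, (np.floor((t - phase_s) * rate_hz * 2.0) % 2).astype(int)

    rate = 4.0                                    # repetition rate in Hz (illustrative)
    t, visual = alternating_sequence(rate, 2.0)   # e.g. luminance pulses

    # Synchrony task: the second sequence is either in phase with the first
    # (synchronous) or lagged by a fraction of the cycle (asynchronous), and
    # the observer discriminates the two cases.
    _, auditory_sync  = alternating_sequence(rate, 2.0, phase_s=0.0)
    _, auditory_async = alternating_sequence(rate, 2.0, phase_s=1.0 / (4.0 * rate))

    # Binding task: both sequences are physically synchronous; the observer
    # reports which value of one attribute (e.g. color) co-occurs with a
    # given value of the other (e.g. a specific orientation).
    _, color       = alternating_sequence(rate, 2.0)
    _, orientation = alternating_sequence(rate, 2.0)
    colors_during_orientation_0 = np.unique(color[orientation == 0])  # -> [0]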
