Statistical learning of crossmodal associations is better than unisensory associations

Robyn Sun Kim, Aaron Seitz, Ladan Shams
Poster
Time: 2009-07-02  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract


Background: The human brain is constantly engaged in learning new regularities and associations in the sensory environment. Statistical learning studies have shown that this type of learning can occur passively, even in the absence of a task. We recently demonstrated that learning of audio-visual sequences can occur in parallel with, and independently of, unisensory auditory and unisensory visual learning (Seitz, Kim et al. 2007). Here we asked whether learning of unisensory and crossmodal associations is equally efficient, or whether one type of learning is more effective than the other. On the one hand, there is greater connectivity within sensory regions, so establishing associations between tokens within the same modality may be more efficient. On the other hand, the noise processes that corrupt signals within the same sensory modality tend to be correlated, and these noise-driven correlations may mask the meaningful correlations and make learning of within-modality associations more difficult.

Purpose: In the current experiment, we directly compared learning of auditory-visual and visual-auditory sequential pairs with learning of visual-visual and auditory-auditory sequential pairs.

Method: 36 naïve subjects were randomly assigned to one of four groups (A-V, V-A, A-A, or V-V). Each participant was first passively exposed to a stream of stimuli (see Figure, left panel). Unbeknownst to the subjects, the stream consisted of randomly ordered repetitions of four base-pair sequences. For the A-V, V-A, A-A, and V-V groups, base pairs consisted of a sound followed by an image, an image followed by a sound, a sound followed by another sound, or an image followed by another image, respectively. During exposure, subjects were simply asked to observe the stimuli and were not informed of the subsequent test. After exposure, participants were tested in a two-interval forced-choice task in which they indicated which of two sequential stimulus pairs seemed more familiar.

Results: The A-V, V-A, and A-A groups performed significantly above chance on the familiarity test, and the V-V group's performance approached significance (p = .055). More importantly, both crossmodal groups performed significantly better than both unisensory groups (see Figure, right panel). These data indicate that auditory-visual sequential learning is indeed superior to unisensory visual and unisensory auditory sequential learning.
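The exposure-and-test procedure described in the Method can be summarized with a small simulation sketch. The snippet below is an illustrative, hypothetical reconstruction, not the study's actual stimulus code: the token names, number of repetitions, and foil construction are assumptions. It builds an exposure stream of randomly ordered repetitions of four fixed base pairs and then forms a single two-interval forced-choice trial pairing a base pair with a recombined foil.

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

# Hypothetical tokens standing in for the study's stimuli (assumed, not actual).
# For the A-V group the first element of each pair would be a sound and the
# second an image; the same logic applies to the V-A, A-A, and V-V groups.
first_tokens = ["s1", "s2", "s3", "s4"]    # e.g., four sounds
second_tokens = ["i1", "i2", "i3", "i4"]   # e.g., four images

# Four fixed base pairs: each first token is always followed by the same second token.
base_pairs = list(zip(first_tokens, second_tokens))

def make_exposure_stream(n_repetitions_per_pair=50):
    """Randomly ordered repetitions of the four base pairs, flattened into one stream."""
    pairs = base_pairs * n_repetitions_per_pair
    rng.shuffle(pairs)
    return [token for pair in pairs for token in pair]

def make_2ifc_trial():
    """One familiarity trial: a true base pair versus a recombined (foil) pair."""
    target = rng.choice(base_pairs)
    # Foil: the same first token paired with a second token it never followed during exposure.
    other_seconds = [s for (_, s) in base_pairs if s != target[1]]
    foil = (target[0], rng.choice(other_seconds))
    intervals = [target, foil]
    rng.shuffle(intervals)
    return intervals, intervals.index(target)  # the two intervals and which one is familiar

stream = make_exposure_stream()
trial, correct_interval = make_2ifc_trial()
print(stream[:8], trial, correct_interval)
```

Because the foil recombines tokens that were all equally frequent during exposure, only the pair-level (sequential) statistics distinguish the familiar pair from the foil, which is what makes above-chance performance evidence of statistical learning.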
