Updating Expectancies about Audiovisual Associations in Speech
Tim Paris, Jeesun Kim, Christopher Davis

Date: 2012-06-21 01:30 PM – 03:00 PM
Last modified: 2012-04-25

Abstract


The processing of multisensory information depends on the learned association between sensory cues. In the case of speech, there is a well-learned audiovisual (AV) association between the movements of the lips and the subsequent sound. That is, particular lip and mouth movements reliably lead to a specific sound. EEG and MEG studies that have investigated the differences between this 'congruent' AV association and other 'incongruent' associations have commonly reported event-related potential (ERP) differences from 350 ms after sound onset. Using a 256 active electrode EEG system, we tested whether this 'congruency effect' would be reduced in a context where most of the trials had an altered audiovisual association (auditory speech paired with mismatched visual lip movements). Participants were presented with stimuli over two sessions: in one session, only 15% of trials were incongruent; in the other, 85% were incongruent. We found a congruency effect, with ERPs to congruent and incongruent speech differing between 350 and 500 ms after sound onset. Importantly, however, this effect was reduced within the context of mostly incongruent trials. This reduction in the congruency effect indicates that the way in which AV speech is processed depends on the context in which it is viewed. Furthermore, this result suggests that exposure to novel sensory relationships leads to updated expectations regarding the relationship between auditory and visual speech cues.
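As a rough illustration of the windowed ERP comparison described above, the sketch below computes the mean amplitude in the 350-500 ms post-onset window for congruent and incongruent trials and takes their difference as the congruency effect, once per session context. This is a minimal sketch in Python with NumPy; the array shapes, sampling rate, epoch layout, and random placeholder data are all assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np

# Hypothetical epoched EEG data: (n_trials, n_channels, n_samples),
# with t = 0 at sound onset. Sampling rate and epoch window are
# illustrative assumptions, not taken from the study.
srate = 256                                # samples per second (assumed)
times = np.arange(-0.2, 0.8, 1 / srate)    # epoch from -200 ms to +800 ms

def congruency_effect(congruent, incongruent, times, win=(0.350, 0.500)):
    """Mean amplitude difference (incongruent - congruent) in a time window.

    congruent, incongruent: arrays of shape (n_trials, n_channels, n_samples).
    Returns one value per channel.
    """
    mask = (times >= win[0]) & (times <= win[1])
    # Average over trials first (the ERP), then over the time window.
    erp_con = congruent.mean(axis=0)[:, mask].mean(axis=1)
    erp_inc = incongruent.mean(axis=0)[:, mask].mean(axis=1)
    return erp_inc - erp_con

# Random placeholder data standing in for the two session contexts,
# mirroring the 15% / 85% incongruent-trial proportions.
rng = np.random.default_rng(0)
mostly_congruent_ctx = congruency_effect(
    rng.normal(size=(85, 256, times.size)),   # 85 congruent trials
    rng.normal(size=(15, 256, times.size)),   # 15 incongruent trials
    times)
mostly_incongruent_ctx = congruency_effect(
    rng.normal(size=(15, 256, times.size)),
    rng.normal(size=(85, 256, times.size)),
    times)
# The study's finding corresponds to the effect being smaller in
# magnitude in the mostly-incongruent context than in the
# mostly-congruent one within this window.
print(mostly_congruent_ctx.shape, mostly_incongruent_ctx.shape)
```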
