Audiovisual speech integration is modulated by the interpretation of the auditory stimuli: An ERP study

Jeroen Stekelenburg, Jean Vroomen
Poster
Time: 2009-06-30  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract


Tuomainen et al. (Cognition, 2005) showed that the interpretation of auditory stimuli affects audiovisual (AV) speech integration. Perceptually ambiguous sine-wave speech (SWS) replicas of natural speech were presented to listeners in speech mode (participants were trained to perceive the SWS stimuli as speech) or in non-speech mode (participants were not aware that the auditory stimuli were derived from speech). Audiovisual speech integration (lipreading biasing audition) was observed only for listeners in speech mode. Here, we examined the neural correlates of this effect using the “McGurk mismatch negativity (MMN) paradigm”. In an oddball sequence, ‘standards’ consisted of auditory /onso/ coupled with visual /onso/, and ‘deviants’ consisted of auditory /onso/ coupled with visual /omso/. A visual-only condition was run to rule out the possibility that the AV MMN was confounded by the visual part of the audiovisual deviant. Two groups, one in speech mode and the other in non-speech mode, were presented with the SWS replicas of /onso/, while a third group heard the natural /onso/ speech token. The natural AV deviant induced the McGurk illusion, triggering the automatic auditory change-detection system, as indexed by the MMN. For the SWS stimuli, an MMN was evoked only in the speech mode group (starting at about 180 ms), not in the non-speech mode group. These results demonstrate that the modulation of audiovisual integration by the interpretation of the auditory stimuli takes place automatically, at early sensory processing stages. Our study thus provides evidence for a speech-specific multisensory mode of perception at the neural level.
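For readers unfamiliar with the analysis behind such results, the MMN is obtained as the deviant-minus-standard difference wave of the averaged ERPs. The Python sketch below illustrates that computation and a crude peak-latency measure; the epoch arrays, sampling rate, trial counts, and electrode choice are illustrative assumptions, not the recording or analysis parameters of this study.

import numpy as np

# Minimal sketch of the deviant-minus-standard MMN computation. All names
# and values here are illustrative assumptions (placeholder data, sampling
# rate, trial counts), not this study's actual recording parameters.

rng = np.random.default_rng(0)
fs = 512                                    # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1.0 / fs)      # epoch window: -100 to 500 ms

# Placeholder epochs (trials x samples); in a real analysis these would be
# preprocessed EEG epochs from a fronto-central electrode such as Fz.
std_epochs = rng.normal(0.0, 1.0, (480, times.size))   # AV standards
dev_epochs = rng.normal(0.0, 1.0, (120, times.size))   # AV deviants

std_erp = std_epochs.mean(axis=0)           # average ERP to standards
dev_erp = dev_epochs.mean(axis=0)           # average ERP to deviants
mmn = dev_erp - std_erp                     # difference wave: the MMN

# Latency of the largest negative post-stimulus deflection; with real data
# this is the kind of measure behind a statement like "starting at about
# 180 ms". A visual-only condition, as in the study, would additionally be
# used to check that the difference is not driven by the visual deviant.
post = times > 0
peak_idx = np.argmin(np.where(post, mmn, np.inf))
print(f"most negative post-stimulus deflection at {times[peak_idx] * 1000:.0f} ms")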
