Electrophysiological characterization of multisensory facilitation effects in bimodal speech.
Poster
Virginie van Wassenhove
Neuroscience and Cognitive Science, University of Maryland
Ken W. Grant
Walter Reed Army Medical Center, Army Audiology and Speech Center
David Poeppel
Neuroscience and Cognitive Science, University of Maryland
Abstract ID Number: 93
Full text: Not available
Last modified: June 8, 2003
Abstract
Auditory-visual (AV) speech integration was investigated in three experiments using electroencephalography (EEG). Supra-additive enhancement of early auditory cortical responses was predicted in light of general principles of multisensory integration.
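(For reference, and not stated explicitly in the original abstract: the supra-additivity criterion commonly used in ERP studies of multisensory integration compares the bimodal response with the sum of the unimodal responses,
\[ \mathrm{ERP}_{AV}(t) > \mathrm{ERP}_{A}(t) + \mathrm{ERP}_{V}(t), \]
so a supra-additive effect would appear as an AV response exceeding the summed A and V responses over the early auditory components.)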
Participants identified auditory (A), visual (V), and congruent and incongruent AV syllables while undergoing EEG recording. In Experiments 1 and 2, participants responded according to what they heard while watching a video of the talker's face. In Experiment 3, participants responded according to the visual display, which was mismatched with the auditory stimuli (McGurk and MacDonald, 1976).
Contrary to this prediction, no supra-additive effect on the P1/N1/P2 complex was observed for bimodal speech. Instead, AV speech showed a statistically robust reduction of N1/P2 amplitude relative to the auditory-alone condition, and attentional effects could not fully account for this reduction (Experiment 3). However, the AV conditions did show significantly earlier N1/P2 peaks, suggesting a temporal facilitation of auditory speech processing in the presence of visual kinematics.
The results show that the presence of visual kinematics modulates the speed of auditory perceptual categorization. We hypothesize that the lack of supra-additivity derives from the spectro-temporal complexity and representational status of speech compared with simpler signals.