Effects of visual-auditory stimulus onset asynchrony on auditory event-related potentials in a speech identification task

Jeremy David Thorne, Stefan Debener
Poster
Last modified: 2008-05-13

Abstract

In natural situations such as audiovisual speech, visual (V) events often precede auditory (A) events. In N=17 normal-hearing individuals, we systematically manipulated the V-to-A stimulus onset asynchrony (SOA) in a speech phoneme identification task. SOA varied from 0 to 100 ms in 20 ms steps, enabling us to assess its impact on task performance. We recorded EEG from 68 channels while subjects performed the task, and analyzed differences in auditory evoked potentials (AEPs) between the bimodal (AV) condition and the sum of the unimodal responses (A+V). Behavioral effects were broadly in line with our previous results (Thorne & Debener, in press, Neuroreport), showing an improvement in response times to audiovisual stimuli across a range of SOAs compared with auditory-alone controls (F=2.2, p=.05). The greatest improvement was found at SOA = 80 ms (F=5.5, p<.04). An ANOVA of the AEP P2 latencies revealed significant main effects of SOA (F=13.7, p<.0001) and condition (AV versus A+V; F=14.9, p<.002), and a significant SOA-by-condition interaction (F=3.2, p<.02). The largest AV-(A+V) difference (9 ms) was found specifically at SOA = 80 ms (t=4.2, p<.001). We conclude that audiovisual integration may be optimally facilitated at certain delays, which may be related to the perceived distance of an object.
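For readers unfamiliar with the additive-model comparison referenced above, the following sketch illustrates an AV versus A+V P2-latency contrast on synthetic single-channel data. It is a minimal illustration, not the authors' analysis pipeline; the sampling rate, epoch window, P2 search window (150-250 ms), and all waveform parameters are assumptions chosen for the example.

    import numpy as np

    # Minimal sketch of the additive-model (AV vs. A+V) comparison on
    # synthetic evoked responses. All parameters are illustrative.
    fs = 500                       # sampling rate in Hz (assumed)
    t = np.arange(0, 0.5, 1 / fs)  # 0-500 ms epoch relative to sound onset

    def synthetic_aep(p2_latency_s, rng):
        """Toy auditory evoked potential: N1 trough plus P2 peak, with noise."""
        n1 = -2.0 * np.exp(-((t - 0.100) ** 2) / (2 * 0.015 ** 2))
        p2 = 3.0 * np.exp(-((t - p2_latency_s) ** 2) / (2 * 0.020 ** 2))
        return n1 + p2 + rng.normal(0, 0.2, t.size)

    rng = np.random.default_rng(0)
    # Grand-average responses for one SOA condition (all synthetic):
    av      = synthetic_aep(0.171, rng)   # bimodal (audiovisual) response
    a_alone = synthetic_aep(0.180, rng)   # auditory-alone response
    v_alone = rng.normal(0, 0.2, t.size)  # visual-alone response (no auditory P2)

    # Additive model: if A and V were processed independently, the AV
    # response should equal the sum of the unimodal responses (A+V).
    a_plus_v = a_alone + v_alone

    def p2_latency_ms(aep, window=(0.150, 0.250)):
        """Latency of the largest positive peak in the P2 search window."""
        mask = (t >= window[0]) & (t <= window[1])
        return 1000 * t[mask][np.argmax(aep[mask])]

    shift = p2_latency_ms(a_plus_v) - p2_latency_ms(av)
    print(f"P2 latency shift, (A+V) - AV: {shift:.0f} ms")

In an actual study, a contrast of this kind would be computed per subject and per SOA level and then entered into the repeated-measures ANOVA described in the abstract.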
