Audiovisual integration of speech and non-speech objects: an ERP study
Poster Presentation
Riikka Möttönen
Laboratory of Computational Engineering, Helsinki University of Technology
Virpi Lindroos
Kaisa Tiippana
Mikko Sams
Abstract ID Number: 58
Full text: Not available
Last modified: March 15, 2006
Presentation date: 06/20/2006 10:00 AM in Hamilton Building, Foyer
Abstract
Recent event-related potential (ERP) studies have shown that the auditory N1 to audiovisual speech is suppressed compared with the N1 to acoustic speech alone (Klucharev et al., 2003; Besle et al., 2004; van Wassenhove et al., 2005). In contrast, ERP studies using audiovisual non-speech stimuli have not found such suppression, suggesting that it is specific to audiovisual speech. However, because no study has used both speech and non-speech stimuli, this issue has remained open. We recorded ERPs to acoustic (A), visual (V) and audiovisual (AV) stimuli in four conditions containing (1) A and V speech, (2) A speech and V non-speech, (3) A non-speech and V speech, and (4) A and V non-speech. In the AV stimuli, the onsets of the A and V components were either synchronous or asynchronous (V onset preceded A onset by 200 ms, as is typical of natural AV speech). The subjects were faster at identifying synchronous AV targets than unimodal targets in all conditions. The auditory N1 to both synchronous and asynchronous AV stimuli (non-targets) was suppressed in the condition containing A and V speech. No such suppression was found in the other conditions. The results suggest that the suppression of the auditory N1 is generated by speech-specific multisensory integration mechanisms.