How vision can help audition: Speech recognition in noisy environments

Inga Schepers, Daniel Senkowski, Joerg F. Hipp, Andreas K. Engel
Poster

Abstract


The impact of visual inputs on multisensory audiovisual speech recognition is pronounced under noisy environmental conditions. Here, we explored the influence of visual signals on audiovisual speech recognition under varying degrees of auditory stimulus degradation. Multisensory audiovisual (AV) and unisensory auditory (A) speech signals were presented with no, low, or high degradation of the auditory input. Speech stimuli consisted of simple syllables, and subjects had to detect a target syllable. Event-related potentials (ERPs) to unisensory A stimuli were subtracted from ERPs to multisensory AV stimuli for each degradation condition (i.e., AV − A), and the resulting difference waves were compared across conditions. Effects of auditory signal degradation were observed in the 90-140 ms and 460-520 ms time intervals after sound onset over anterior scalp regions. The local autoregressive average (LAURA) source estimation approach was applied to explore the neuronal sources underlying these effects. In the early time interval, effects of auditory stimulus degradation were linked to activity in superior parietal cortex; in the late time interval, they were localized to parieto-occipital and temporal regions. Our results suggest that a widespread cortical network, engaged at multiple stages of information processing, underlies the important role of visual speech signals under noisy environmental conditions.
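For readers unfamiliar with the difference-wave logic described above, the sketch below illustrates how AV − A difference waves and mean amplitudes in the two reported time windows could be computed. All parameters (sampling rate, channel count) and the simulated data are hypothetical placeholders, not the study's actual recordings or analysis pipeline.

```python
import numpy as np

# Hypothetical epoch parameters (not from the study).
sfreq = 500                               # samples per second
times = np.arange(-0.1, 0.6, 1 / sfreq)   # epoch from -100 ms to 600 ms
n_channels = 64
rng = np.random.default_rng(0)

# Simulated condition-average ERPs, shape (channels, timepoints),
# one AV/A pair per auditory degradation level.
levels = ["none", "low", "high"]
erps_av = {lvl: rng.standard_normal((n_channels, times.size)) for lvl in levels}
erps_a  = {lvl: rng.standard_normal((n_channels, times.size)) for lvl in levels}

def mean_amplitude(erp, times, t_start, t_end):
    """Mean amplitude per channel within [t_start, t_end] seconds."""
    mask = (times >= t_start) & (times <= t_end)
    return erp[:, mask].mean(axis=1)

# AV - A difference wave for each degradation condition.
diff_waves = {lvl: erps_av[lvl] - erps_a[lvl] for lvl in levels}

# Mean difference-wave amplitudes in the two windows named in the
# abstract: 90-140 ms and 460-520 ms after sound onset.
for lvl, diff in diff_waves.items():
    early = mean_amplitude(diff, times, 0.090, 0.140)
    late = mean_amplitude(diff, times, 0.460, 0.520)
    print(f"{lvl}: early {early.mean():.3f}, late {late.mean():.3f}")
```

In the actual study, such window averages would then be compared statistically across degradation conditions; the random data here merely demonstrate the subtraction and windowing steps.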
