Time course of audio-visual phoneme identification: A cross-modal Gating study
Carolina Sánchez-García, Sonia Kandel, Christophe Savariaux, Nara Ikumi, Salvador Soto-Faraco

Date: 2012-06-21 01:30 PM – 03:00 PM

Abstract


When both are present, visual and auditory information are combined to decode the speech signal. Past research has addressed to what extent visual information contributes to distinguishing confusable speech sounds, but has usually ignored the continuous nature of speech perception. Here we examine the time course of the contribution of visual and auditory information during the process of speech perception. To this aim, we designed an audio-visual gating task using videos recorded with a high-speed camera. Participants were asked to identify gradually longer fragments of pseudowords varying in the central consonant. Spanish consonant phonemes with different degrees of visual and acoustic saliency were included and tested in visual-only, auditory-only, and audio-visual trials. The data showed different patterns of contribution of unimodal and bimodal information during identification, depending on the visual saliency of the presented phonemes. In particular, for phonemes that are clearly more salient in one modality than in the other, audio-visual performance equalled that of the better unimodal condition. For phonemes with more balanced saliency, audio-visual performance was better than in either unimodal condition. These results shed new light on the time course of audio-visual speech integration.
