An Investigation of Perceptual Dependencies in Audiovisual Speech Perception

Nick Altieri, Noah Silbert, Lei Pei
Poster
Time: 2009-07-01 09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract

Ecological speech signals consist of both auditory and visual (lipreading) information. An important problem in cognitive psychology is determining whether the dimensions of perception, including the auditory and visual components of speech, are combined independently (e.g., Garner & Morton, 1969). To test whether the auditory and visual components are perceived independently, we applied the statistical methodology of General Recognition Theory (GRT; Ashby & Townsend, 1986), a multidimensional extension of signal detection theory. We carried out an identification experiment in which auditorily and visually articulated syllables /be/ and /ge/ were combined in a 2 × 2 factorial design to yield four stimulus categories (auditory_visual): /be_be/, /be_ge/, /ge_be/, and /ge_ge/. The stimuli /be_ge/ and /ge_be/ elicit the classic McGurk percepts /de/ and /bge/, respectively. Results obtained from model fitting indicate that the auditory and visual components of speech are perceived independently; however, the marginal d′ values and decision criteria can differ as a function of stimulus level.
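Although the abstract does not include analysis code, the Python sketch below illustrates one of the standard marginal analyses that GRT licenses for this design: computing marginal d′ for the auditory dimension at each level of the visual stimulus from a 4 × 4 identification confusion matrix. The confusion counts, variable names, and response ordering are all hypothetical, introduced only to make the computation concrete; this is not the authors' fitting code.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 4x4 identification confusion matrix (counts), for illustration
# only; these are not the experiment's data. Rows are stimuli and columns are
# responses, both ordered (auditory_visual): be_be, be_ge, ge_be, ge_ge.
confusions = np.array([
    [180,  30,  25,  15],   # stimulus /be_be/
    [ 40, 160,  20,  30],   # stimulus /be_ge/
    [ 35,  25, 155,  35],   # stimulus /ge_be/
    [ 10,  30,  40, 170],   # stimulus /ge_ge/
])

# Row-wise response proportions.
p = confusions / confusions.sum(axis=1, keepdims=True)

# Marginal probability of an auditory "ge" response: collapse over the visual
# response by summing the columns whose auditory component is /ge/.
p_aud_ge = p[:, 2] + p[:, 3]

def d_prime(p_false_alarm, p_hit):
    """Marginal d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(p_hit) - norm.ppf(p_false_alarm)

# Auditory d' with the visual component held at /be/ (rows 0 and 2) and at
# /ge/ (rows 1 and 3). Unequal values indicate that auditory sensitivity
# changes with the visual stimulus level, as the abstract reports for the
# marginal d' values and decision criteria, even when independence holds.
d_aud_given_v_be = d_prime(p_aud_ge[0], p_aud_ge[2])
d_aud_given_v_ge = d_prime(p_aud_ge[1], p_aud_ge[3])

print(f"auditory d' | visual /be/: {d_aud_given_v_be:.2f}")
print(f"auditory d' | visual /ge/: {d_aud_given_v_ge:.2f}")
```

A full GRT analysis goes further, fitting bivariate Gaussian perceptual distributions and comparing models with and without perceptual and decisional dependencies; the marginal d′ comparison above is only the entry point to that model-fitting step.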
