Context information on the McGurk effect
Masayo Kajimura, Haruyuki Kojima, Hiroshi Ashida

Last modified: 2011-09-02

Abstract


We examined whether the McGurk effect depends on context. A syllable expected to produce the effect was embedded in simple Japanese sentences containing a three-syllable noun. The noun was either a real word or a non-word, with /ba/, /da/, or /ga/ as the second syllable. A stimulus consisted of an auditory speech sentence presented with a simultaneous video of a speaker producing it. Participants were asked to report the noun as they heard it by filling in the blanks in printed sentences. The error rates in reporting the nouns were low when only the audio stimuli were presented. In the critical conditions, a voice and a video were combined either congruently or incongruently, and participants judged the noun word by hearing. The McGurk effect occurred in both the real-word and non-word conditions, but the error rates were higher for the real words. For the real words, the error rate was higher when the visual syllable matched the context. The rate of these context-matched errors was higher than the rate of comparable errors for non-words. This finding suggests that audio-visual integration in speech recognition is influenced by higher cognitive processes that are sensitive to the semantic context.
