Phonetic learning in audiovisual speech
Jean Vroomen
Talk
Time: 2009-07-02 11:10 AM – 11:30 AM
Last modified: 2009-06-04
Abstract
In the ventriloquist illusion, the perceived location of a target sound is displaced toward a light flash delivered simultaneously at some distance, despite instructions to ignore that flash. Moreover, if subjects are exposed to spatially displaced sound-flash pairs for some time, aftereffects in sound localization can be observed: unimodal target sounds are shifted in the direction of the flashes seen during the preceding exposure phase. Presumably, this shift reflects an adjustment in the sound localization system that minimizes the discrepancy between the auditory and visual signals (i.e., recalibration). The same kinds of adaptive aftereffects are by now well documented for audiovisual speech. For example, if an ambiguous sound halfway between /b/ and /d/ is dubbed onto a video of a face saying /b/, there is not only an immediate bias from the lipread information (i.e., subjects report 'hearing' /b/), but also an aftereffect: the once-ambiguous sound is subsequently identified as /b/ right away. In this example, it is thus the lipread information that 'teaches' the auditory system how to interpret the initially ambiguous sound. In my talk, I will compare recalibration with another phenomenon, 'selective speech adaptation', that may look similar but is nevertheless very different. I will present data showing that recalibration, but not selective speech adaptation, occurs only if the sound and the lipread signal are assigned to the same phonetic event. Moreover, I will present EEG and fMRI data on the brain processes underlying recalibration.