Physical and perceptual factors determine the mode of audio-visual integration in distinct areas of the speech processing system
Hwee-Ling Lee, Johannes Tuennerhoff, Sebastian Werner, Chandrasekharan Pammi, Uta Noppeney
Poster
Last modified: 2008-05-09
Abstract
Speech and non-speech stimuli differ in (i) their physical (spectro-temporal) structure and (ii) their perceptual (phonetic/linguistic) representation. To dissociate these two levels of audio-visual integration, this fMRI study employed original spoken sentences and their sinewave analogues, which participants were trained to perceive either as speech (group 1) or as non-speech (group 2). In both groups, all stimuli were presented in the visual, auditory, or audiovisual modality. AV-integration areas were identified by superadditive and subadditive interactions in a random-effects analysis. While no superadditive interactions were observed, subadditive effects were found in the right superior temporal sulcus for both speech and sinewave stimuli. The left ventral premotor cortex showed increased subadditive interactions for spoken sentences relative to their sinewave analogues, irrespective of whether the analogues were perceived as speech or non-speech. More specifically, only the familiar auditory speech signal suppressed the premotor activation elicited by passive lipreading in the visual conditions, suggesting that acoustic rather than perceptual/linguistic features determine AV integration in the mirror neuron system. In contrast, the mode of AV integration differed between sinewave analogues perceived as speech and those perceived as non-speech in bilateral anterior STS areas that have previously been implicated in speech comprehension. In conclusion, physical and perceptual factors determine the mode of AV integration in distinct speech processing areas.
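For reference, the super-/subadditivity criterion invoked above can be sketched as follows, assuming the conventional interaction contrast used in multisensory fMRI analyses (the symbols are illustrative names for estimated condition responses, not notation taken from the study):

% Interaction contrast for AV integration; \beta_A, \beta_V, \beta_{AV}
% denote estimated BOLD responses to the auditory, visual, and audiovisual
% conditions, each relative to a common baseline (illustrative sketch).
\[
  I = \beta_{AV} - \left(\beta_{A} + \beta_{V}\right),
  \qquad
  \begin{cases}
    I > 0 & \text{superadditive integration}\\
    I < 0 & \text{subadditive integration}
  \end{cases}
\]

Under this definition, a subadditive effect means the audiovisual response falls short of the sum of the unimodal responses, which is the pattern reported here for the right superior temporal sulcus and the left ventral premotor cortex.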