An embodied view of multisensory speech

Kevin Munhall, David Ostry, Charlie Schroeder, Asif A Ghazanfar

Time: 2009-06-30  10:30 AM – 12:30 PM

Abstract


Central Theme
When two people talk, each listens to the other's words and visually processes the facial movements, but communication runs much deeper than that. It involves watching gaze, body posture, and facial expressions as the words are being said. Indeed, how we hear and see is further influenced by our own bodily states. Thus, the meaning of a speech act is situated both in the body and in the social context, and this meaning engages neural processes that guide the subsequent actions of the interlocutors.


While the idea that communication is embodied and situated is widely acknowledged, there have been few attempts to bridge the epistemic gaps between different approaches to this problem. The aim of this symposium is to help close these gaps. We will present data revealing the multiple levels and timescales on which multisensory speech operates, and show that the behavioral and neural levels potentially operate as a unified, resonant system linking at least two communicating individuals.


List of participants and brief abstracts


Kevin G. Munhall, Queen’s University
Kevin will present work on the different spatial and temporal scales on which visual and multisensory speech perception operate. For example, people naturally move their heads when they speak, and this rhythmic head motion conveys linguistic information. Head movements correlate strongly with the pitch and amplitude of the talker's voice, and perceivers detect speech in noisy situations more accurately when natural head motion is present.


David J. Ostry, McGill University
David will review data showing that somatosensory signals from the facial skin and muscles of the vocal tract provide a rich source of sensory input in speech production. This somatosensory input is important in guiding both speech motor learning and speech perception.


Charles E. Schroeder, Nathan Kline Institute & Columbia University
Charlie will present data supporting the hypothesis that the enhancing effects of vision on the perception of speech operate through the ongoing oscillatory activity of local neuronal ensembles in primary auditory cortex. These oscillations are 'predictively' modulated by visual input, so that the related auditory input arrives during a high-excitability phase and is thus amplified.


Asif A. Ghazanfar, Princeton University
Asif will review work suggesting that the temporal structure of auditory and visual communication signals matches, and perhaps resonates with, the structure of ongoing oscillations in the temporal lobe. Specifically, the low-frequency theta rhythm appears to be a key feature linking signalers and receivers in a communicative exchange.
