Anatomically and functionally distinct regions within multisensory superior temporal sulcus differentially integrate temporally-asynchronous speech
Ryan Andrew Stevenson, Nicholas A. Altieri, Sunah Kim, Thomas W. James
Poster
Last modified: 2008-05-15
Abstract
While the multisensory superior temporal sulcus (mSTS) is a known site of multisensory convergence, it is a large structure with regions that respond differentially to select stimulus properties, such as facial movements, whole-body motion, and auditory speech. Previous studies have used several different methods of localizing mSTS. Here, we explored the possibility that these different methods may localize functionally distinct, sensory-integrating sub-regions within a larger mSTS complex. Specifically, we compared two previously used contrasts for identifying mSTS: first, a contrast of synchronous versus asynchronous audio-visual speech trials, and second, a conjunction of two unisensory contrasts, audio speech > baseline and visual speech > baseline. Importantly, both contrasts identified regions of mSTS (among other previously reported regions); however, these regions were anatomically distinct, with the synchrony-defined region lying superior and lateral to the audio-visual conjunction-defined region. Furthermore, the activation patterns in response to stimulus asynchronies differed between these regions. As expected, the synchrony-defined region responded preferentially to synchronous stimuli, but unexpectedly, it responded only to synchronous speech and not to any level of asynchrony, including 100 ms offsets. Activation in the audio-visual conjunction-defined region, however, increased monotonically with asynchrony. We propose that mSTS is a complex composed of a number of smaller, functionally distinct regions of multisensory convergence.