Top-down influences on the detection and discrimination of spatially distributed auditory-somatosensory events.
Holger Franz Sperdin, Céline Cappe, Micah M Murray
Poster
Time: 2009-07-02 09:00 AM – 10:30 AM
Last modified: 2009-06-04
Abstract
Simple reaction times (RTs) to auditory-somatosensory multisensory stimuli are facilitated beyond predictions of probability summation not only when stimuli are delivered to the same location, but also when they are widely separated, either between left and right hemispaces (Murray et al., 2005 Cerebral Cortex) or between front and rear spaces (Zampini et al., 2007 Neuropsychologia). While this pattern of effects may depend, at least partially, on the particular body surface stimulated, results to date indicate that absolute spatial position is not the determining factor in whether or not facilitative effects vary between spatially aligned and misaligned multisensory conditions (Tajadura-Jimenez et al., 2009 Neuropsychologia). One interpretation of these findings is that they provide insights regarding the likely spatial representations within the brain region(s) mediating the effects. Here we addressed the possibility that top-down and/or task-related influences can dynamically impact such spatial representations and, by extension, the extent to which facilitative multisensory effects are observed. Participants performed a simple detection task in response to auditory, somatosensory (vibrotactile stimulation of the left or right index finger and thumb), or simultaneous auditory-somatosensory stimuli that were either spatially aligned or misaligned (e.g. auditory stimulation to the left and somatosensory stimulation to the right). In addition to the simple detection task, we informed the participants that they would be queried from time to time (25% of trials) as to whether or not a given stimulus in a given sensory modality had been presented to the left or right on the preceding trial. Four possible probes (2×2: sound or vibration, queried for the left or right side) were used across the eight stimulus conditions (four unisensory and four multisensory).
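The "facilitation beyond probability summation" criterion mentioned above is conventionally assessed by comparing the multisensory RT distribution against the race-model upper bound P(RT_AS ≤ t) ≤ P(RT_A ≤ t) + P(RT_S ≤ t). The sketch below is purely illustrative (the function name, quantile grid, and the simulated RT samples are our assumptions, not the study's actual data or analysis code):

```python
import numpy as np

def race_model_violation(rt_multi, rt_a, rt_s,
                         quantiles=np.arange(0.05, 1.0, 0.05)):
    """Compare multisensory RTs to the race-model (probability summation)
    bound: P(RT_AS <= t) <= P(RT_A <= t) + P(RT_S <= t).
    Returns, per quantile, the amount by which the multisensory CDF
    exceeds the bound; positive values indicate violations."""
    # Evaluate empirical CDFs at the quantiles of the multisensory RTs.
    t = np.quantile(rt_multi, quantiles)
    cdf = lambda rts, ts: np.mean(rts[:, None] <= ts[None, :], axis=0)
    bound = np.minimum(cdf(rt_a, t) + cdf(rt_s, t), 1.0)
    return cdf(rt_multi, t) - bound

# Illustrative simulated RT samples (ms), not the study's data:
rng = np.random.default_rng(0)
rt_a = rng.normal(300, 40, 500)      # auditory-only
rt_s = rng.normal(310, 40, 500)      # somatosensory-only
rt_multi = rng.normal(255, 35, 500)  # faster multisensory responses
violations = race_model_violation(rt_multi, rt_a, rt_s)
```

With the parameters above, the fastest quantiles of the multisensory distribution exceed the race-model bound, the signature of genuine multisensory facilitation rather than statistical redundancy.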
Probes could be spatially congruent or incongruent with the preceding stimulus, thus yielding 16 different conditions in total. In this way, we sought to have participants selectively attend to each spatial location, while nonetheless performing a simple detection task irrespective of spatial information. After a first set of analyses failed to reveal a main effect of side of stimulation, data were collapsed across left-sided and right-sided presentations. Detection rates and RTs were then analyzed with repeated measures ANOVAs using condition as the within-subject factor (auditory, somatosensory, aligned multisensory, and misaligned multisensory). Subjects reliably detected the stimuli (>95% of trials), with no reliable differences in detection rates across stimulus conditions (F(3,6)=1.42; p=0.33). By contrast, RTs varied significantly across conditions (F(3,6)=31.45; p<0.0005) and were faster to both the multisensory aligned and the multisensory misaligned conditions than to either unisensory condition (p<0.003 in all cases). There was no evidence for a reliable difference between RTs to aligned and misaligned multisensory conditions (p>0.48). Preliminary analyses of the responses to probes inquiring about the spatial location of somatosensory stimuli suggest that performance sensitivity (d′) is impaired on multisensory trials in which the stimuli were spatially misaligned, relative to unisensory trials (t(8)=5.59; p<0.0006). Taken together, our results suggest that while task demands do not affect detection of the stimuli (whether spatially aligned or misaligned), the ability to indicate the spatial position of a stimulus did vary with alignment. These results suggest that there are multiple stages of auditory-somatosensory interactions that are differentially susceptible to the influence of task demands on spatial processing.
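The sensitivity index d′ used for the probe analysis is the standard signal-detection quantity d′ = z(hit rate) − z(false-alarm rate). A minimal sketch, assuming a log-linear correction for extreme rates (the counts below are invented for illustration, not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (add 0.5 to each count) keeps the
    rates away from exactly 0 or 1, where z is undefined."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hr) - z(far)

# Hypothetical counts for a probe such as "was the vibration on the left?":
sensitivity = d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38)
```

Lower d′ for misaligned multisensory trials than for unisensory trials, as reported above, would mean participants were less able to discriminate the somatosensory stimulus location when a conflicting auditory location accompanied it.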