Decomposition of audiovisual interactions in event-related fields using independent component analysis

Melissa M Pangelinan, Erika K Hussey, Shelby N Wilson, David E Poeppel
Poster
Time: 2009-07-01  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract


Introduction: Selective attention is fundamental to the localization of sensory information from different modalities. Several studies have reported multimodal interactions evident in amplitude and/or latency differences of event-related potentials (ERPs) (Eimer & Schröger, 1997; Molholm et al., 2002; Giard & Peronnet, 1999) and event-related fields (ERFs) (Shams et al., 2005). High-density recordings make it possible to resolve the spatial topography of these event-related brain processes. However, a given waveform may be generated by multiple spatially discrete neural sources. Thus, regional averaging techniques, commonly applied to reduce the dimensionality of high-density recordings, may not fully capture important aspects of the event-related dynamics contained in a large number of signals. Independent component analysis (ICA) is an ideal tool for tracking both the temporal and spatial dynamics of electroencephalographic (EEG) and magnetoencephalographic (MEG) data during different sensory, motor, and attention-related processes.

Purpose and Approach: The purpose of this study was twofold. First, we examined interactions between visual and auditory attention on behavioral performance and cortical activation patterns during spatial localization using traditional regional-averaging ERF techniques. Second, we decomposed the event-related fields from the whole scalp using infomax ICA (Makeig et al., 1997) to confirm and isolate the neural processes involved in low-level sensory interactions, movement-related activity, and spatial attention.
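The decomposition step can be illustrated with a minimal sketch. The study used infomax ICA on whole-scalp ERFs; as an assumption for illustration, scikit-learn's FastICA (a different ICA algorithm) stands in here, applied to synthetic "sensor" data mixed from two hypothetical source time courses:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)

# Two hypothetical source time courses (e.g., a sensory evoked
# response and a movement-related component) -- illustrative only.
s1 = np.sin(2 * np.pi * 10 * t)
s2 = np.sign(np.sin(2 * np.pi * 3 * t))
S = np.c_[s1, s2]                      # (samples, sources)

# Mix the sources into three "sensor" channels, as a scalp
# array would record overlapping fields, and add a little noise.
A = np.array([[1.0, 0.5],
              [0.5, 1.0],
              [0.2, 0.8]])             # (channels, sources)
X = S @ A.T + 0.01 * rng.standard_normal((1000, 3))

# Unmix: recover component time courses and their sensor
# topographies (columns of the estimated mixing matrix).
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)         # (samples, components)
mixing = ica.mixing_                   # (channels, components)
```

In practice the same idea is applied to the full channel-by-time ERF matrix, and the recovered topographies are inspected to label components as sensory, motor, or attention-related.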

Methods: Whole-head MEG was recorded from 10 right-handed participants during performance of an audiovisual cued-attention paradigm. During the baseline conditions, a visual stimulus (8.7° to the right or left of a central fixation) or an auditory stimulus (monaural, to the right or left ear) was presented for 50 ms. Participants were instructed to respond quickly and accurately to the location of the target (left or right). During the cued conditions, visual or auditory cues were presented 750 ms prior to the target stimuli. On 80% of the trials, the cue was presented at a spatial location congruent with the target. Spatial location (left versus right), sensory modality (auditory versus visual), and condition (baseline versus cued) were used as factors to examine differences in: 1) the behavioral responses (reaction time and accuracy), and 2) the ERFs.

Results: Preliminary results indicate that spatially congruent auditory cues facilitated reaction time and accuracy for targets in either modality, whereas visual cues impeded behavioral responses to auditory targets regardless of the spatial congruency of the cue. Consistent with these results, auditory cues modulated the MEG ERFs (specifically the M100 and M200 components) in primary unisensory areas (particularly over auditory cortex) during target presentation. ICA confirmed the interaction of auditory cues with responses to visual stimuli, and additionally isolated downstream motor processes and attention-related components. These findings suggest that low-level multisensory interactions may modulate downstream processing, improving the speed and accuracy of behavioral responses.
