Audio-visual integration of emotional processing: Evidence from event-related potentials

Julia Föcker, Brigitte Röder
Poster
Time: 2009-06-30, 09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract


Three experiments were conducted to investigate audio-visual integration in emotional processing. In the first experiment, participants categorized the emotion and rated the intensity of happy, sad, angry, and neutral dynamic facial and vocal expressions while attending either to the face or to the voice. Emotional expressions were presented unimodally as a voice or a face, or as a combination of emotionally congruent (bimodal congruent) or emotionally incongruent (bimodal incongruent) face-voice pairs. Participants performed worse in the bimodal emotionally incongruent condition than in the unimodal and bimodal emotionally congruent conditions. This incongruency effect could be due to a response conflict and/or to an emotional conflict. To control for this confound, the second experiment used a modified version of Experiment 1 in which emotional conflict and response conflict were separated. Behavioral data showed shorter reaction times to bimodal emotionally congruent than to bimodal emotionally incongruent trials even in the absence of a response conflict. In a third experiment, the time course of audio-visual emotional integration was analyzed using event-related potentials (ERPs). Results suggest early incongruency effects: bimodal emotionally congruent trials elicited a more pronounced positivity than bimodal emotionally incongruent trials at 190-230 ms, with a fronto-central scalp distribution, for both the attended and the unattended modality. This effect cannot be attributed to a lack of attentional processing, since unimodal vocal and facial attention effects were observed in the same time window (190-230 ms). In sum, these results suggest uni- and crossmodal attention effects in audio-visual integration of emotions.