Crossmodal bias effects in perception of human body language
Poster Presentation
Jan Van den Stock
Cognitive and Affective Neuroscience Laboratory, Tilburg University, Tilburg, The Netherlands
Julie Grèzes
LPPA, Collège de France, Paris, France
Beatrice de Gelder
Cognitive and Affective Neuroscience Laboratory, Tilburg University, Tilburg, The Netherlands
Abstract ID Number: 136
Full text: Not available
Last modified: March 18, 2006
Presentation date: 06/19/2006 4:00 PM in Hamilton Building, Foyer
Abstract
Research on emotions expressed by the whole body has been sparse. In two experiments we measured recognition of whole-body expressions of emotion presented in combination with emotional sound fragments. In the first experiment, subjects were presented with a sentence spoken in an emotional tone of voice while a still image of a whole-body expression was shown simultaneously. The emotion expressed by the body was either congruent or incongruent with that of the voice. Subjects were instructed to rate the emotion in the voice. Results indicate that perception of the vocal expression is biased towards the emotion expressed by the body.
In the second experiment, we combined dynamic images of body emotions with sound fragments. Sounds consisted of either human vocalisations or animal sounds (birds chirping and dogs barking). Preliminary testing indicated that the emotions were recognised equally well whether conveyed by human vocalisations or by animal sounds. Sound fragments were combined with emotionally congruent or incongruent video images. The task was to categorise the emotion expressed by the body. Results show that categorisation of the body expression is influenced by incongruent human sounds but not by incongruent animal sounds. These results indicate that semantic congruency by itself is not sufficient to explain crossmodal bias effects.