Audiovisual integration of emotional signals from solo improvisation

Karin Petrini, Phil McAleer, Frank Pollick
Poster
Time: 2009-06-29  11:00 AM – 12:30 PM

Abstract

Introduction
The multisensory nature of affect perception has scarcely been investigated, especially when it comes to music. In the present study we applied a paradigm often used in face-voice affect perception to music solo improvisation in order to examine how the emotional valences of sound and gesture are integrated when a single emotion is perceived.

Method
A set of short movies was obtained by asking two musicians (a drummer and a saxophonist) to improvise on their instruments so as to communicate happiness, sadness, anger, fear, disgust, surprise, and a neutral state (Figure 1a). Three emotions were then selected for each musician on the basis of a pilot experiment with 15 participants: anger, happiness, and neutral for the drummer, and surprise, sadness, and happiness for the saxophonist. These movies (the congruent conditions) were manipulated to obtain audio-only and video-only versions, as well as a series of incongruent conditions in which the audio track of one emotion was combined with the video track of another. The resulting 48 movies were shown to 20 musical novices, who judged which emotion was conveyed and rated its strength.
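To make the design concrete, the sketch below enumerates the condition types described above. It is a hypothetical reconstruction: the abstract does not specify the exact pairing scheme behind the 48 movies, so this within-instrument enumeration yields only 30 items, and the actual stimulus set presumably included further pairings or repetitions.

```python
from itertools import product

# Hypothetical reconstruction of the stimulus set; the exact pairing
# scheme behind the 48 movies is not specified in the abstract.
EMOTIONS = {
    "drummer": ["anger", "happiness", "neutral"],
    "saxophonist": ["surprise", "sadness", "happiness"],
}

def build_conditions(instrument, emotions):
    """Enumerate unimodal and bimodal conditions for one musician."""
    conditions = []
    # Unimodal: audio-only and video-only versions of each emotion.
    for emo in emotions:
        conditions.append((instrument, "audio-only", emo, None))
        conditions.append((instrument, "video-only", None, emo))
    # Bimodal: every audio track paired with every video track;
    # matching tracks are congruent, mismatched ones incongruent.
    for audio_emo, video_emo in product(emotions, emotions):
        label = "congruent" if audio_emo == video_emo else "incongruent"
        conditions.append((instrument, label, audio_emo, video_emo))
    return conditions

stimuli = [c for inst, emos in EMOTIONS.items()
           for c in build_conditions(inst, emos)]
print(len(stimuli))  # 30 under this within-instrument scheme, not 48
```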

Results
Participants identified the intended emotion above chance for all three saxophone stimuli and for the drummer stimulus representing anger. Perception of the intended emotion was facilitated in the congruent bimodal condition relative to the incongruent bimodal conditions (see Figure 1b for an example). A congruent bimodal facilitation was also found relative to the audio-only and video-only conditions, although the extent of this facilitation varied with instrument and emotion.
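A conventional way to test "above chance" in a forced-choice design like this is a one-sided binomial test. The sketch below assumes a seven-alternative response set (six emotions plus neutral, so chance is 1/7) and uses made-up counts; neither assumption is confirmed by the abstract.

```python
from scipy.stats import binomtest

# Hypothetical above-chance check: with seven response alternatives
# (six emotions plus neutral), chance level is 1/7. The counts below
# are illustrative, not the study's data.
n_trials = 20    # e.g., one judgment per participant for a stimulus
n_correct = 11   # hypothetical number of correct identifications
chance = 1 / 7

result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
print(f"p = {result.pvalue:.4f}")  # small p => above-chance identification
```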

Conclusion
We integrate visual and auditory signals when perceiving emotion from music improvisation. However, the auditory signal appears to be 'dominant' in the perception of affective expression, except when its emotional valence is ambiguous. In conclusion, the results are in line with those of face-voice affect perception, suggesting that a similar process underlies both means of communication when affect is perceived from multisensory events.
