Emotional incongruence of facial expression and voice tone investigated with event-related brain potentials of infants
Kota Arai, Yasuyuki Inoue, Masaya Kato, Shoji Itakura, Shigeki Nakauchi, Michiteru Kitazaki

Abstract


Human emotions are perceived from multi-modal information, including facial expression and voice tone. We aimed to investigate the development of the neural mechanisms underlying cross-modal perception of emotion. We presented congruent and incongruent combinations of a facial expression (happy) and a voice tone (happy or angry), and recorded EEG from 8- to 10-month-old infants and adults to analyze event-related brain potentials. Each participant received ten repetitions of 10 trials presented in random order. Half of the participants received 20% congruent (happy face with happy voice) and 80% incongruent (happy face with angry voice) trials; the others received 80% congruent and 20% incongruent trials. We employed an oddball paradigm but did not instruct participants to count a target. In infants, the oddball (infrequent) stimulus increased P2 amplitude and delayed its latency compared with the frequent stimulus. When the oddball stimulus was also emotionally incongruent, the increase in P2 amplitude and the delay in its latency were larger than for the emotionally congruent oddball stimulus. In adults, however, we found no difference in P2 amplitude or latency between conditions. These results suggest that 8- to 10-month-old infants already have a neural basis for detecting emotional incongruence between facial expression and voice tone.
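To make the described design concrete, the following is a minimal Python sketch of two elements mentioned above: building a randomized trial sequence with an 80%/20% frequent-to-oddball ratio, and measuring P2 peak amplitude and latency from an averaged ERP waveform. It is not the authors' analysis code; the sampling rate, the 150-275 ms P2 search window, and all function names are assumptions made for this illustration.

import numpy as np

def make_trial_sequence(n_trials=100, p_oddball=0.2,
                        frequent="congruent", oddball="incongruent", seed=0):
    """Return a shuffled list of trial labels with the requested oddball proportion."""
    n_odd = round(n_trials * p_oddball)
    trials = [oddball] * n_odd + [frequent] * (n_trials - n_odd)
    rng = np.random.default_rng(seed)
    rng.shuffle(trials)
    return trials

def p2_peak(erp, times, window=(0.150, 0.275)):
    """Find the most positive point of an averaged ERP within a P2 search window.

    erp   : 1-D array of amplitudes (e.g. microvolts), averaged over trials/electrodes
    times : 1-D array of time points in seconds, aligned with erp
    Returns (peak_amplitude, peak_latency_in_seconds).
    """
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmax(erp[mask])
    return float(erp[mask][idx]), float(times[mask][idx])

# Usage with a synthetic ERP sampled at 250 Hz (a fake P2-like bump peaking at 200 ms),
# just to exercise the two functions.
times = np.arange(-0.1, 0.6, 1 / 250)
erp = 3.0 * np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))
amplitude, latency = p2_peak(erp, times)
print(make_trial_sequence(10), amplitude, round(latency, 3))

Comparing the (amplitude, latency) pair obtained for the frequent versus oddball averages, and for congruent versus incongruent oddballs, would correspond to the contrasts reported in the abstract.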

