Enhancement of vocal sound detection by facial view in the monkey

Yoshinao Kajikawa, Charles E Schroeder
Poster
Time: 2009-07-01  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract


To study cortical mechanisms of audio-visual (AV) integration of vocalization, we trained a macaque monkey to perform an AV oddball task. The monkey initiates each trial by pulling a lever and maintaining gaze within a defined window on a monitor. In each trial, a series of "non-target" AV stimuli, composed of conspecific vocalization sounds plus a movie/image, is presented repetitively, interleaved with a low-pass filtered or scrambled image for a random duration between 600 and 1200 msec. Randomly after 3-6 non-targets, an AV target is presented. Targets differ from the non-targets in the preceding series in both A and V, V only, or A only, which ensures that the monkey monitors both A and V. The monkey releases the lever upon detection of a target to obtain a reward. With sound and image of simultaneous onset, the monkey's behavioral responses showed multisensory facilitation: the hit rate was highest for A+V change targets and lowest for A-only change targets, and reaction time (RT) was shortest for A+V change targets and longest for V-only change targets. By adding another condition in which non-targets were composed of sound alone while bimodal monitoring was still required, we found that the facial view enhanced detection of A change targets even at attenuated sound levels at which the monkey could barely detect the A change when the sound was presented alone.
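The trial structure described above (3-6 non-target AV stimuli, each followed by a random 600-1200 msec interval, then one target whose change type is A+V, V only, or A only) can be sketched as follows. This is an illustrative reconstruction, not the authors' actual task code; all names (`make_trial`, `CHANGE_TYPES`) are hypothetical.

```python
import random

# Possible target change types from the abstract: both modalities,
# visual only, or auditory only.
CHANGE_TYPES = ("A+V", "V", "A")

def make_trial(rng):
    """Return one trial as a list of (stimulus, isi_ms) events.

    Hypothetical sketch of the oddball sequence: 3-6 non-targets,
    each followed by a low-pass filtered or scrambled image for a
    random 600-1200 ms interval, then one target.
    """
    n_nontargets = rng.randint(3, 6)            # non-targets before the target
    trial = []
    for _ in range(n_nontargets):
        isi = rng.uniform(600, 1200)            # random inter-stimulus interval (ms)
        trial.append(("non-target", isi))
    target = rng.choice(CHANGE_TYPES)           # which feature(s) change in the target
    trial.append((f"target:{target}", 0.0))     # lever release on detection ends the trial
    return trial

if __name__ == "__main__":
    rng = random.Random(0)
    for event, isi in make_trial(rng):
        print(event, round(isi))
```

The lever pull, gaze window, and reward delivery are omitted; the sketch captures only the stimulus sequencing that the behavioral analysis depends on.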
