Face/voice integration in monkey auditory cortex

Asif Ghazanfar, Princeton University

Abstract
Monkeys and humans recognize the correspondence between species-specific facial and vocal expressions, and these visual and auditory channels can be integrated into unified percepts that enhance detection and discrimination. The role that sensory areas, such as the auditory cortex, play in such complex signal processing is poorly understood. To address this, we recorded neural activity in the auditory cortex of rhesus monkeys while they viewed and heard vocalizing conspecifics. We found that the primate auditory cortex integrates facial and vocal signals through both enhancement and suppression of neural activity, in local field potentials as well as spiking activity. The majority of multisensory responses were specific to face/voice integration, and the lateral belt region of the auditory cortex showed a greater frequency of multisensory integration than the core auditory cortex. One possible source of face-specific visual information in the auditory cortex is the superior temporal sulcus (STS). To test this hypothesis, we recorded concurrently from the auditory cortex and the STS and have been exploring the nature of their interactions during unimodal versus bimodal vocal processing.
