Vocalization-context dependent neural representation of faces in monkey lateral prefrontal cortex

Joji Tsunada, Allison E Baker, Selina J Davis, Asif A Ghazanfar, Yale E Cohen
Poster
Time: 2009-07-01  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract
In daily communication, we recognize communication signals (e.g., facial expressions and vocalizations) based on the preceding communication context. Such context-dependent recognition requires combining multi-modal communication signals. Neurons in the lateral prefrontal cortex (LPFC) are modulated by both auditory and visual communication signals and are involved in monitoring prior events. The LPFC is therefore likely to be involved in the context-dependent processing of multi-modal communication signals. To test this hypothesis, we recorded local field potentials (LFPs) from the LPFC of rhesus monkeys while they listened to vocalizations or viewed silent movies of monkeys vocalizing. Specifically, each stimulus sequence began with 3 - 5 repetitions of the same vocalization, followed by presentation of a silent movie. The repeated vocalization was a coo, grunt, or scream. The movie showed the facial movements of a monkey that were elicited by the vocalization. Importantly, all of the vocalization-movie stimuli came from the same monkey, eliminating any individual-based factors. The vocalization and the movie were either congruent (e.g., the vocalization was a coo and the movie showed a monkey cooing) or incongruent (e.g., the vocalization was a coo and the movie showed a monkey grunting). We analyzed 96 sites that showed a significant increase in LFP power (frequency range: 4 - 100 Hz) during the period in which the movie was presented (movie-stimulus period). We found that the peak power and the latency to peak power of the LFP (4 - 50 Hz) during presentation of the vocalization (vocalization period) were modulated by both the type of vocalization and the number of preceding vocalizations.
Also, the LFP (4 - 100 Hz) during the movie-stimulus period was modulated by both the type of vocalization and the number of vocalizations that preceded the movie. Data to date suggest, however, that this modulation was not strictly dependent on whether the vocalization and face were congruent. Overall, these findings suggest that a neural system in the LPFC processes faces in a vocalization-context-dependent manner. Such a mechanism in the LPFC may contribute to context-dependent recognition of communication signals.
