When hearing the bark helps to identify the dog: Semantically-congruent sounds modulate the identification of masked pictures

Yi-Chuan Chen, Charles Spence
Poster
Time: 2009-07-02  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract
We report a series of five experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the picture and the sound were presented simultaneously, a semantically congruent sound improved, whereas a semantically incongruent sound impaired, participants’ picture identification performance, as compared to the white-noise control condition. A significant facilitatory effect was also observed at SOAs around 300 ms, whereas no such semantic congruency effects were observed at the longest SOA (533 ms). These results therefore suggest that the representations of visual and auditory stimuli can interact in a shared semantic system when they refer to a common object or event. Furthermore, this crossmodal semantic interaction is not constrained by the need for the strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that these audiovisual semantic interactions likely occur in a short-term buffer that temporarily retains the semantic representations of multisensory stimuli in order to form a coherent multisensory representation. These results are explained in terms of Potter’s (1993) notion of conceptual short-term memory.
