Influence of selective attention to sound in multisensory integration
Luis Morís Fernández, Maya Visser, Salvador Soto-Faraco

Date: 2012-06-21 01:30 PM – 03:00 PM

Abstract


We assessed the role of audiovisual integration in selective attention by testing selective attention to sound. Participants were asked to focus on one of two speech streams presented simultaneously at different pitches. At the end of each trial, we measured recall of words from the cued or the uncued sentence using a two-alternative forced choice (2AFC) task. A video clip of a speaker's mouth was presented in the middle of the display, matching one of the two simultaneous auditory streams (it matched the cued sentence on 50% of trials and the uncued sentence on the rest). In Experiment 1 the cue was 75% valid. Recall on valid trials was better than on invalid trials. The critical result, however, was that only in the valid condition did we find a difference between audiovisually matching and audiovisually mismatching sentences; in the invalid condition no such difference emerged. In Experiment 2 the cue to the relevant sentence was 100% valid, and we included a control condition in which the lips matched neither sentence. Performance was better when the lips matched the cued sentence than when they matched the uncued sentence or neither, suggesting a benefit of audiovisual matching rather than a cost of mismatch. Our results indicate that attention to acoustic frequency (pitch) plays an important role in determining which sounds benefit from multisensory integration.
