Dissociating attention and audiovisual integration in the sound-facilitatory effect on metacontrast masking
Yi-Chia Chen, Su-Ling Yeh


Abstract


In metacontrast masking, target visibility is impaired by a subsequent non-overlapping, contour-matched mask, a phenomenon attributed to low-level processing. We previously found that sound can reduce metacontrast masking (Yeh & Chen, 2010), yet how sound exerts this effect, and whether the sound-triggered attention system plays a major role, remain unresolved. Here we examine whether the sound-facilitatory effect is caused by alertness, attentional cueing, or audiovisual integration. Two sounds were either presented simultaneously with the target and the mask, respectively, or one preceded the target by 100 ms and the other followed the mask by 100 ms. No-sound and one-sound conditions were used for comparison. Participants discriminated the truncated part (up or down) of the target, with four target-to-mask SOAs (14 ms, 43 ms, 114 ms, and 157 ms) presented in randomly mixed order. Results showed that the attentional cueing effect was evident when compared to the condition with one leading sound. Additionally, a selective (rather than overall) improvement in accuracy and RT as a function of SOA was found with synchronized sounds compared to without, suggesting audiovisual integration rather than alertness. The audiovisual integration effect is attributed to enhanced temporal resolution rather than temporal ventriloquism.

References


Yeh, S.-L., & Chen, Y.-C. (2010). Crossmodal interaction in metacontrast masking. Journal of Vision, 10(7), 894.
