Sound facilitates visual perceptual learning
Single Paper Presentation
Robyn Kim, UCLA, Psychology
Aaron Seitz, Boston University, Psychology
Ladan Shams, UCLA, Psychology
Abstract ID Number: 143
Full text: Not available
Last modified: March 19, 2006
Presentation date: 06/21/2006 8:30 AM in Hamilton Building, McNeil Theatre
Abstract
Although performance on low-level visual perceptual tasks can be improved with training, such learning requires intensive practice. To investigate whether the addition of sound can facilitate visual perceptual learning, we compared coherent-motion detection performance of an audio-visual-trained group with that of a unisensory (visual)-trained group over ten days. On trials containing only visual signals, both groups improved with training; furthermore, improvement was specific to the trained motion directions and was therefore likely due to low-level perceptual learning. However, on those visual-only trials, the audio-visual-trained group showed both significantly faster learning within a session and better retention of improvements across sessions than the unisensory-trained group. Control conditions ruled out an alerting effect of sound as the underlying factor. These results indicate that sound can indeed facilitate visual perceptual learning. In addition, while both groups demonstrated an increase in audio-visual interaction from pre-test to post-test, the change in interaction for the audio-visual-trained group significantly exceeded that of the unisensory-trained group, suggesting that audio-visual interactions show both fast (within-session) learning and sustained training effects. Altogether, these findings suggest that multisensory training is more effective for learning, and that unisensory training is suboptimal even for learning unisensory tasks.