Learning a novel visual-to-auditory sensory substitution algorithm (SSA) reveals fast, context-dependent plasticity in the multisensory perception system
Uri Hertz

Last modified: 2011-09-02

Abstract


Multisensory perception is a cognitive process involving multiple components and cognitive faculties. Processing in cortical areas that receive unisensory input must be combined with other components, such as prior world knowledge about audiovisual coupling (for example, between speech and lip movements), or with task demands, such as searching for a specific audiovisual event. To examine multisensory integration under different contexts, without altering the physical stimuli, we used a visual-to-auditory Sensory Substitution Algorithm (SSA), which translates visual images into auditory soundscapes and was designed to provide visual information to the blind. We employed a multisensory experimental design and spectral analysis to assess the effect of training with the SSA on audiovisual perception. Subjects watched images and listened to soundscapes that drifted in and out of synchronization, before and after an hour of training with the SSA. Subjects were then asked to look for an audiovisual combination of a soundscape and an image that creates an audiovisual plus (+) shape.
Our method uncovers unisensory and multisensory components of multisensory perception under different experimental conditions. We detect changes in the amplitude of response, which indicates how strongly a cortical area is associated with the component, and in the phase of response, which conveys the preferred latency of response within a stimulus block.
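The amplitude and phase measures described above can be sketched with a minimal Fourier analysis of a block-design time course. This is only an illustrative sketch on simulated data; the simulated timing values (TR, block period) and the noise model are hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical example: a simulated BOLD time course sampled at TR = 2 s,
# with stimulus blocks repeating every 30 s (block frequency = 1/30 Hz).
tr = 2.0               # repetition time in seconds (assumed value)
n_scans = 240          # number of volumes (assumed value)
block_period = 30.0    # seconds per stimulus block (assumed value)
t = np.arange(n_scans) * tr

# Simulated response: a sinusoid at the block frequency with a latency
# (phase) offset, plus Gaussian noise.
rng = np.random.default_rng(0)
signal = 1.5 * np.sin(2 * np.pi * t / block_period - np.pi / 4)
timecourse = signal + 0.5 * rng.standard_normal(n_scans)

# Spectral analysis: take the FFT and read off the amplitude and phase
# at the bin corresponding to the block frequency.
spectrum = np.fft.rfft(timecourse)
freqs = np.fft.rfftfreq(n_scans, d=tr)
block_bin = np.argmin(np.abs(freqs - 1.0 / block_period))

# Amplitude of response: how strongly the voxel follows the block design.
amplitude = 2 * np.abs(spectrum[block_bin]) / n_scans
# Phase of response: preferred latency within the stimulus block.
phase = np.angle(spectrum[block_bin])

print(f"amplitude ~ {amplitude:.2f}, phase ~ {phase:.2f} rad")
```

The amplitude recovered at the block-frequency bin approximates the simulated response strength (1.5), and the phase encodes the latency offset of the response relative to block onset.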
Our results demonstrate the fast and context-dependent plasticity of the multisensory integration system. Object detection network areas, such as the left IPS, seem to rely mostly on visual input when auditory shape information cannot be conveyed, but shift to context-dependent multisensory characteristics after training. Other areas involved in audiovisual integration, such as the right Insula, rely primarily on the temporal relations between auditory and visual stimuli, and then shift to a non-phase-locked response to auditory stimuli after learning the SSA, when visual shape information is redundant. When an audiovisual plus detection task is introduced, this area's response becomes phase-locked to the plus detection.
These results demonstrate the flexibility of audiovisual perception, its dependency on previous world knowledge and context, and the speed with which these changes take place.
