Weighting or selecting sensory inputs when memorizing body-turns: what is actually being stored?
Poster Presentation
Manuel Vidal, Max-Planck Institute for Biological Cybernetics
Daniel Berger, Max-Planck Institute for Biological Cybernetics
Heinrich Bülthoff, Max-Planck Institute for Biological Cybernetics

Abstract ID Number: 121
Last modified: July 1, 2005
Abstract
Many previous studies have focused on how humans integrate inputs provided by different modalities for the same physical property. Some claim that these inputs are merged into a single amodal percept, whereas others propose that the most relevant sensory input is selected.
We designed an experiment to study whether the senses are selected or merged, and to investigate what is actually stored and recalled in a reproduction task. Participants experienced passive whole-body yaw rotations paired with a rotation of the visual scene (a limited-lifetime star field) turning 1.5 times faster. They then actively reproduced the same rotation in the opposite direction, with the body cue, the visual cue, or both cues available.
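For concreteness, the short sketch below illustrates the gain relationship between the two cues during presentation. The 40° body turn is an invented example; the abstract specifies the 1.5 gain but no particular angles.

```python
# Illustrative only: the 1.5 gain is from the abstract, the 40-degree
# body turn is an invented example angle.
PRESENTATION_GAIN = 1.5

body_turn = 40.0                             # passive whole-body yaw rotation (deg)
visual_turn = PRESENTATION_GAIN * body_turn  # star field turns 1.5x faster -> 60 deg

print(f"body: {body_turn} deg, visual: {visual_turn} deg")
```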
When the reproduction gain was the same as during presentation, the angles reproduced with both cues were smaller than with the visual cue alone, larger than with the body cue alone, and the responses were more precise. This suggests that the turns experienced in each modality (vision and body) are stored independently, and that the fused estimate lies in between, with higher reliability. This provides evidence for near-optimal integration. Modifying the reproduction gain changed body-based reproductions more than visually based ones, which indicates a visual dominance when a cue-matching problem is introduced.
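The near-optimal integration pattern described above corresponds to the standard maximum-likelihood cue-combination model (inverse-variance weighting). The sketch below is illustrative only: the angle estimates and noise levels are invented, since the abstract reports no numerical values; it merely shows how the fused estimate falls between the single-cue estimates while its predicted variability is lower than either cue alone.

```python
import numpy as np

def integrate(est_visual, sigma_visual, est_body, sigma_body):
    """Combine two independent cue estimates by inverse-variance weighting
    (standard maximum-likelihood cue integration)."""
    w_visual = (1 / sigma_visual**2) / (1 / sigma_visual**2 + 1 / sigma_body**2)
    w_body = 1 - w_visual
    fused = w_visual * est_visual + w_body * est_body
    # Predicted variance of the fused estimate is below either single cue
    fused_var = (sigma_visual**2 * sigma_body**2) / (sigma_visual**2 + sigma_body**2)
    return fused, np.sqrt(fused_var)

# Invented numbers: visual-only reproduction overshoots (60 deg),
# body-only undershoots (40 deg), with made-up noise levels.
fused, fused_sd = integrate(est_visual=60.0, sigma_visual=8.0,
                            est_body=40.0, sigma_body=12.0)
print(f"fused estimate: {fused:.1f} deg, sd: {fused_sd:.1f} deg")
# The fused estimate (~53.8 deg) lies between the single-cue estimates,
# and its sd (~6.7) is smaller than both 8.0 and 12.0 -- the qualitative
# pattern reported in the abstract.
```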