Combining sensory cues for spatial updating: The minimal sensory context to enhance mental rotations
Manuel Vidal, Alexandre Lehmann, Heinrich Bülthoff
Poster
Last modified: 2008-05-09
Abstract
Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint, which arises either from rotation of the test array or from rotation of the observer. Several studies have reported that the cognitive cost of a mental rotation is reduced when the change in viewpoint results from the observer’s motion, which can be explained by the recruitment of the spatial updating mechanisms involved during self-motion. However, little is known about how this process is triggered and how the various sensory cues available might contribute to updating performance. We used a virtual reality setup to study mental rotations that, for the first time, allowed us to investigate different combinations of the modalities stimulated during viewpoint changes. In an earlier study we validated this platform by replicating the classical advantage found for a moving observer (Lehmann, Vidal, & Bülthoff, 2007). In subsequent experiments we showed the following. First, increasing the opportunities for spatial binding (by displaying the rotation of the tabletop on which the test objects lay) was sufficient to significantly reduce the mental rotation cost. Second, a single modality stimulated during the observer’s motion (Vision or Body) is not enough to trigger the advantage. Third, combining two modalities (Body & Vision or Body & Audition) significantly improves mental rotation performance. These results are discussed in terms of sensory-independent triggering of spatial updating during self-motion, with additive effects when sensory modalities are co-activated. In conclusion, we propose a new sensory-based framework that can account for all of the results reported in previous work, including some apparent contradictions about the role of extra-retinal cues.