Reconstruction of Spatial Cognition from Other's View and Motion Information
Kyo Hattori, Daisuke Kondo, Yuki Hashimoto, Tomoko Yonemura, Hiroyuki Iizuka, Hideyuki Ando, Taro Maeda

Last modified: 2011-09-02

Abstract


Spatial cognition requires the integration of visual perception and self-motion. When visual input lacks corresponding motion information, as with image flows on a TV screen, spatial cognition is often confused and it becomes hard to understand where surrounding objects are. The aim of this work is to investigate the mechanism by which spatial cognition is constructed from image flows and self-motion. In our experiments, we evaluated spatial cognition performance in VR environments while varying the amounts of visual and motion information. Visual and motion information are closely related, since motion can be estimated from the image flows of the background. To modulate visual information, two head-mounted displays with different fields of view (32x24 and 90x45 deg.) were used. To compensate explicitly for motion information, a marker indicating the viewpoint movement of the images was added to the display and its effect was tested. The results show that the marker significantly improved performance, and that the larger FOV could compensate for the lack of motion information, although this effect was not significant when the marker was present. We discuss these results in relation to an image stabilization method that maintains consistency between motion and image flows.
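The abstract notes that viewpoint motion can be estimated from the image flows of the background. A minimal sketch of this idea, under the simplifying assumption of a translating camera viewing a distant background (so all background pixels share roughly the same 2-D flow vector), is a robust average of the flow field; the function name `estimate_translation` and the outlier-rejection scheme are illustrative assumptions, not the authors' method:

```python
import numpy as np

def estimate_translation(flow, outlier_sigma=2.0):
    """Estimate a global 2-D image-plane translation from an (N, 2)
    array of optical-flow vectors. Flow vectors far from the median
    (e.g. from independently moving foreground objects) are discarded
    as outliers before averaging. Hypothetical sketch, not the
    authors' algorithm."""
    flow = np.asarray(flow, dtype=float)
    med = np.median(flow, axis=0)           # robust central flow
    dist = np.linalg.norm(flow - med, axis=1)
    sigma = dist.std() + 1e-9               # avoid division issues
    inliers = flow[dist < outlier_sigma * sigma]
    return inliers.mean(axis=0)

# Synthetic demo: background shifts by (3, -1) px; a few outlier
# vectors simulate a moving object in the scene.
rng = np.random.default_rng(0)
background = rng.normal([3.0, -1.0], 0.1, size=(200, 2))
outliers = rng.normal([20.0, 15.0], 0.5, size=(10, 2))
motion = estimate_translation(np.vstack([background, outliers]))
print(motion)  # approximately [3, -1]
```

In a real pipeline the flow vectors would come from an optical-flow estimator over the background region; rotation and depth variation would require a richer motion model than pure translation.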
