Visuo-haptic object representation is viewpoint-independent

Simon Lacey, Department of Neurology, Emory University School of Medicine

Abstract
Previous research (Newell et al., Psychological Science 12:37-42, 2001) suggests that visual and haptic object recognition is viewpoint-dependent both within- and cross-modally. The objects in that study were presented fixed along their extended y-axis; haptic exploration therefore favoured the back surface, and vision the front surface. When objects were rotated, subsequent recognition showed an effect of viewpoint, but only when the axis of rotation exchanged the front and back surfaces. In the present study, we removed this front/back advantage by presenting similar objects along their extended z-axis. Participants performed an object discrimination task in the two within-modal and the two cross-modal conditions, with the objects presented in both unrotated and rotated (180 degrees) orientations. Recognition was reduced for rotated compared to unrotated objects, but the particular axis of rotation was unimportant, and there was no overall main effect of modality. However, modality and rotation interacted such that the effect of rotation was confined to the within-modal conditions. These results suggest that cross-modal recognition is indeed viewpoint-independent, mediated by a high-level abstract representation whose viewpoint-independence was obscured by the presentation axis selected in earlier work.

