Acquiring object affordances through touch, vision, and language
Argiro Vatakis, Katerina Pastra, Panagiotis Dimitrakis

Date: 2012-06-19 03:00 PM – 04:00 PM
Last modified: 2012-04-24

Abstract


We often use tactile input to recognize familiar objects and to acquire information about unfamiliar ones. We also use our hands to manipulate objects and to employ them as tools. However, research on object affordances has focused mainly on visual input, thus limiting the level of detail one can obtain about object features and uses. Beyond this limited multisensory input, data on object affordances have also been constrained by restricted participant responses (e.g., naming tasks). To address these limitations, we aimed to identify a new methodology for obtaining undirected, rich information about people’s perception of a given object and the uses it can afford, without necessarily requiring that the object be viewed. Specifically, 40 participants were video-recorded in a three-block experiment. During the experiment, participants were exposed to pictures of objects, pictures of someone holding the objects, and the actual objects, and they were allowed to provide unconstrained verbal responses describing the stimuli presented and their possible uses. The stimuli were lithic tools, chosen for their novelty, man-made design, design for a specific use/action, and the absence of functional knowledge and movement associations. The experiment yielded a large linguistic database, which was analyzed following a response-based specification. Analysis of the data revealed a significant contribution of visual and tactile input to the naming and definition of object attributes (color, condition, shape, size, texture, weight), while no significant tactile information was obtained for the object features of material, visual pattern, and volume. Overall, this new approach highlights the importance of multisensory input in the study of object affordances.
