From unsupervised to supervised categorization in vision and haptics

Nina Gaißert, Christian Wallraven, Isabelle Bülthoff
Poster
Time: 2009-06-30  09:00 AM – 10:30 AM
Last modified: 2009-06-04

Abstract

Categorization studies have primarily focused on the visual perception of objects. In everyday life, however, humans combine percepts from different modalities. To better understand this cue combination and to learn more about the mechanisms underlying categorization, we performed several categorization tasks visually and haptically and compared the two modalities. All experiments used the same set of complex, parametrically defined, shell-like objects based on three shape parameters (see figure and [Gaissert, N., C. Wallraven and H. H. Bülthoff: Analyzing perceptual representations of complex, parametrically-defined shapes using MDS. Eurohaptics 2008, 265-274]). For the visual task, we used printed pictures of the objects; for the haptic experiments, 3D plastic models were produced with a 3D printer and explored by blindfolded participants using both hands.
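As a rough illustration of what such a parametric stimulus space can look like, the sketch below samples a shell-like surface from a logarithmic spiral with three free shape parameters. The spiral model and the parameter names (rise, expansion, aperture) are assumptions chosen for illustration; the abstract does not specify the actual parameterization used to generate the stimuli.

```python
import numpy as np

def shell_surface(rise=1.5, expansion=0.15, aperture=1.0,
                  n_turns=4, n_theta=200, n_phi=40):
    """Sample points on a logarithmic-spiral shell surface.

    The three shape parameters (rise, expansion, aperture) are
    illustrative stand-ins for the three parameters of the actual
    stimulus space, which the abstract does not specify.
    """
    theta = np.linspace(0, 2 * np.pi * n_turns, n_theta)  # angle along the coil
    phi = np.linspace(0, 2 * np.pi, n_phi)                # angle around the tube
    theta, phi = np.meshgrid(theta, phi)

    r = np.exp(expansion * theta)   # coil radius grows exponentially
    tube = aperture * 0.4 * r       # tube (aperture) radius scales with the coil

    x = (r + tube * np.cos(phi)) * np.cos(theta)
    y = (r + tube * np.cos(phi)) * np.sin(theta)
    z = rise * r + tube * np.sin(phi)  # 'rise' sets how steeply the spire climbs
    return x, y, z

x, y, z = shell_surface()
print(x.shape)  # (40, 200) grid of surface points, ready for 3D plotting/printing
```

Varying each parameter independently would trace out a three-dimensional grid of shapes analogous to the stimulus set described above.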
Three categorization tasks were performed in which all objects were presented to participants simultaneously. In the unsupervised task, participants sorted the objects into as many groups as they liked. In the semi-supervised task, they had to form exactly three groups. In the supervised task, they received three prototype objects (see figure) and sorted all remaining objects into three categories defined by these prototypes. Each task was repeated until the same groups were formed twice in a row. The number of repetitions needed was the same across modalities, showing that the task was equally difficult visually and haptically.

For more detailed analyses, we generated similarity matrices recording which stimuli were grouped together. As a measure of consistency – within and across modalities as well as within and across tasks – we calculated cross-correlations between these matrices (see figure). Correlations within modalities were always higher than correlations across modalities. In addition, as expected, the more constrained the task, the more consistently participants grouped the stimuli. Critically, multidimensional scaling (MDS) of the similarity matrices showed that all three shape parameters were perceived both visually and haptically in all categorization tasks, but that the weighting of the parameters depended on the modality. In line with our previous results, this demonstrates the remarkable robustness of visual and haptic processing of complex shapes.
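To make the analysis pipeline concrete, here is a minimal Python sketch of how such similarity matrices, their cross-correlations, and an MDS embedding could be computed. The stimulus count, the random groupings, and all function names are hypothetical stand-ins; the study's actual matrices came from participants' groupings, not simulated data.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.manifold import MDS

def cooccurrence_matrix(labels):
    """Similarity matrix from one grouping: entry (i, j) is 1 if
    stimuli i and j were sorted into the same group, else 0."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def matrix_correlation(a, b):
    """Pearson correlation over the off-diagonal entries of two
    similarity matrices (a simple consistency measure)."""
    mask = ~np.eye(a.shape[0], dtype=bool)
    return pearsonr(a[mask], b[mask])[0]

# Hypothetical groupings of 21 stimuli into three groups,
# standing in for one visual and one haptic participant.
rng = np.random.default_rng(0)
visual = rng.integers(0, 3, 21)
haptic = rng.integers(0, 3, 21)

sim_v = cooccurrence_matrix(visual)
sim_h = cooccurrence_matrix(haptic)
print("cross-modal consistency:", matrix_correlation(sim_v, sim_h))

# Embed a similarity matrix with metric MDS; averaging the binary
# matrices over participants first would yield graded similarities.
# A 3D solution lets the recovered axes be compared with the three
# shape parameters of the stimulus space.
dissimilarity = 1.0 - sim_v
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords.shape)  # (21, 3): one 3D coordinate per stimulus
```

Comparing how strongly each recovered MDS dimension stretches or compresses relative to the generating parameters is one way the modality-dependent weighting reported above could be quantified.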
