Single-object consistency facilitates multisensory pair learning: evidence for unitization
Elan Barenholtz, David Lewkowicz, Lauren Kogelschatz

Date: 2012-06-19 11:00 AM – 12:30 PM
Last modified: 2012-04-24

Abstract
Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent—and thus were consistent with belonging to a single personal identity—compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony—which provides a highly reliable alternative cue that properties derive from a single object—improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.
