Does Maximum Likelihood Integration Predict How We Perceive Walking Humans? A Study on the Audiovisual Integration of Biological Motion
Ana Catarina Mendonça, Jorge A Santos, Miguel Castelo-Branco
Poster
Time: 2009-06-30 09:00 AM – 10:30 AM
Last modified: 2009-06-04
Abstract
The MLI (maximum likelihood integration) model has been shown to successfully predict multisensory integration given the reliability of each unimodal cue. To what extent it predicts performance with behaviorally valid and semantically congruent stimuli, however, has yet to be established. Here we test the model with familiar stimuli: walking humans and their step sounds. Biological motion perception is an inherently bimodal task, yet there is scarcely any information on how we integrate visual and auditory stimuli of this kind. These stimuli are potentially ambiguous and biased, as we tend to perceive them as facing towards us rather than away. It has previously been established that, given a highly ambiguous visual walker and easily perceived human steps, the auditory cue becomes dominant (Mendonça & Santos, 2008). Here, we manipulated the amount of information conveyed by the auditory channel and analyzed how it interacted with the biased visual stimulus. We then compared these results with the MLI prediction curves, computed from each participant's results in the unimodal conditions.
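For reference, the MLI rule against which performance is tested combines unimodal estimates by weighting each cue by its relative reliability (inverse variance); the notation below is a generic formulation of this standard rule, not transcribed from the poster:

\hat{s}_{AV} = w_V \hat{s}_V + w_A \hat{s}_A, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_V^2 + 1/\sigma_A^2}

so that the combined variance, \sigma_{AV}^2 = \sigma_V^2 \sigma_A^2 / (\sigma_V^2 + \sigma_A^2), is never larger than that of the more reliable cue.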
We presented visually biased point-light walkers, masked step sounds, or both simultaneously. The stimuli could be perceived as moving towards or away from the participant, and participants were asked to indicate the direction in which each stimulus was oriented. Results revealed that, despite large individual differences (see Figure 1), the MLI model accurately predicted performance in the audiovisual condition.
Our data support the hypothesis that we perceive biological motion in a statistically optimal fashion: the visual and auditory stimuli are weighted differently according to one's interpretation of, and trust in, each cue.
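To make this weighting concrete, the following minimal sketch (in Python, with hypothetical numbers; neither the function nor the values come from the study) illustrates how inverse-variance weighting shifts the combined estimate towards the auditory cue as its masking decreases:

    def mli_combine(est_v, var_v, est_a, var_a):
        """Inverse-variance (reliability) weighted combination of two cues."""
        w_v = (1 / var_v) / (1 / var_v + 1 / var_a)  # visual weight
        w_a = 1 - w_v                                # auditory weight
        est_av = w_v * est_v + w_a * est_a           # combined estimate
        var_av = (var_v * var_a) / (var_v + var_a)   # combined variance
        return est_av, var_av

    # Hypothetical example: a biased, noisy visual cue (positive = "front")
    # and an auditory cue signalling "back", unmasked in four steps (A -> D).
    est_v, var_v = 0.8, 4.0
    for label, var_a in zip("ABCD", (8.0, 4.0, 2.0, 1.0)):
        est_av, var_av = mli_combine(est_v, var_v, est_a=-1.0, var_a=var_a)
        print(f"{label}: combined estimate = {est_av:+.2f}, variance = {var_av:.2f}")

As the auditory variance drops, the auditory weight grows and the combined estimate crosses from "front" to "back", mirroring the decrease in errors described above.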
Figure 1: Results of two participants (P1 and P2) in the audiovisual condition, for back-oriented stimuli. The x axis represents four different auditory stimuli of decreasing difficulty, from A (most masked) to D (least masked). The y axis represents the percentage of “front” answers. As the auditory stimulus becomes easier, participants make fewer errors. The dashed lines show the MLI prediction curves for each participant, based on the unimodal results.