Development of multimodal spatial integration and orienting behavior in humans
Poster
Patricia Neil
CNS, California Institute of Technology
Christine Chee-Ruiter
Division of Biology, California Institute of Technology
Christian Scheier
California Institute of Technology
David Lewkowicz
New York State Institute for Basic Research in Developmental Disabilities
Shinsuke Shimojo
Division of Biology, California Institute of Technology and NTT Communication Science Labs, Japan

Abstract ID Number: 144
Full text: Not available
Last modified: May 20, 2003

Abstract
The spatial location of objects and events is often specified by concurrent auditory and visual inputs. Adults of many species, including humans, take advantage of such multimodal redundancy in spatial localization. Previous studies have shown that adults respond more quickly and reliably to multimodal than to unimodal localization cues. The current study investigated for the first time the development of audio-visual integration in spatial localization in infants 1-10 months of age. Infants were presented with a series of unimodal or spatially and temporally coincident bimodal lights and sounds at +/-25 and +/-45 degrees from center, and their head and eye orienting responses were measured from digital video records. Results showed that infants older than four months responded significantly faster to bimodal stimuli than to visual-only or auditory-only stimuli, whereas younger infants showed no enhancement in response latency for bimodal conditions. This is consistent with neurophysiological findings from multimodal sites in the superior colliculus of infant monkeys showing that multimodal enhancement of responsiveness is not present at birth but emerges during the first months of life. Additionally, we found age-dependent effects of position and modality on response latency, supporting multiple developmental stages preceding the onset of adult-type bimodal localization responses.