Dual-Tasking with Complex Stimuli Within and Between Sensory Modalities
Poster Presentation
Camille Koppen
Oxford University, Department of Experimental Psychology
*Charles Spence
Oxford University, Department of Experimental Psychology
Abstract ID Number: 70
Full text: PDF
Last modified: August 8, 2005
Abstract
Research into whether it is harder to divide attention between stimuli presented in the same versus different sensory modalities has yielded conflicting answers. Some researchers argue for independence between sensory modalities (Duncan, Martens, & Ward, 1997; Treisman & Davies, 1973; Wickens, 1980), while others support the existence of a common attentional resource (see Spence, Nicholls, & Driver, 2001, for a review). In addition, two lines of evidence suggest that attention may be compartmentalised according to different representational levels, the idea being that there may be separate perceptual and semantic attentional resources (Rapp & Hendel, 2003; see Marks, 2004, for a discussion of Garner interference; cf. also Wickens, 1980, for another theory of attentional compartmentalisation). The studies investigating these issues have typically used simple stimuli (e.g., alphanumeric characters, light flashes, or brief vibrations). We tested these two hypotheses using a dual target detection paradigm with complex stimuli, varying whether the two targets occurred in the same or different modalities (e.g., two auditory tasks versus one visual and one auditory task) and whether they were processed at the same or different levels of representation (e.g., two semantic tasks versus one perceptual and one semantic task). This design allowed us to test for crossmodal interference and for separate attentional resources at the perceptual and semantic levels of stimulus processing. Performance was better when the two tasks were in the same modality than when they were in different modalities, and when they were at the same level of representation than at different levels. Hence, the results support the notion of a supramodal attentional resource.