Age differences in the pattern of benefit of audio-visual speech perception in younger and older adults

Natalie Phillips, Jean-Pierre Gagné, Madhavi Basu, Laura Copeland, Penny Gosselin, Arnaud Saint-Pierre, Axel Winneke
Talk
Time: 2009-07-01  03:40 PM – 04:00 PM
Last modified: 2009-06-04

Abstract


Background and Purpose: Older adults (OAs) perform more poorly than young adults (YAs) on speech understanding tasks, even with clinically normal audiograms. However, speech perception can be enhanced when one can both hear and see the speech cues produced by one’s communication partner. Using auditory (A) and visual (V) information to understand speech is referred to as audio-visual (AV) speech perception. The purpose of this study is to describe and understand aspects of bimodal speech integration in YAs and OAs by investigating the sensory, perceptual, and cognitive processes involved in AV speech perception.

Methods: Young (n = 19; mean age = 22 yrs; 3 males) and older (n = 19; mean age = 69 yrs; 3 males) normal-hearing adults were asked to identify the words terminating low- and moderately constrained sentences under A-alone, V-alone, and AV conditions. The A and AV conditions were presented in multi-talker masking noise. To induce a similar perceptual load in both groups, the signal-to-noise ratio was titrated to produce a 50% error rate in the A-alone, low-context condition.

Results: Word identification improved across modality (V: M = 22%; A: M = 54%; AV: M = 87%; F(2,72) = 753, p < .001), and there was a significant effect of Context (F(1,36) = 21.8, p < .001), indicating improved performance for moderately constrained (M = 58%) versus low-constraint sentences (M = 52%). The Modality X Context interaction (F(2,72) = 14.4, p < .001) indicated an effect of context for the A and AV modalities only. Although young adults performed better than older adults overall (M = 58% vs. 51%, respectively), both groups showed a significant AV improvement. When the visual enhancement effect was calculated as the relative gain over A-alone performance, YAs showed a greater improvement over baseline (F(1,36) = 6.2, p = .018). We also tested memory for the terminal words that were identified in our experimental sentences. There was a strong trend towards a significant Modality X Group interaction (F(2,62) = 2.76, p = .071): OAs remembered more terminal words when they were presented in the AV mode than in the V mode, and this facilitatory effect of mode was not present in the YAs.

Conclusion: Our findings indicate that, although both age groups used visual speech cues to enhance speech perception, the younger adults showed a greater benefit in word recognition per se. However, presenting speech bimodally facilitated memory for the information in OAs but not in YAs. These results suggest that processing speech in the AV modality is less perceptually demanding and can free up resources for higher-order processes such as memory.
