Multisensory Integration in Prelingually Deafened Adults with Cochlear Implants

Julie M Verhoff, Lynne E Bernstein
Talk
Time: 2009-06-30  02:30 PM – 02:45 PM
Last modified: 2009-06-04

Abstract


Prelingually deafened adults, that is, individuals who were either born deaf or lost their hearing prior to learning language, are typically not considered to be good candidates for a cochlear implant (CI) because their performance on auditory-only (AO) speech perception tests is significantly lower than that of postlingually deafened adult CI recipients on the same tests. However, AO word and sentence recognition scores do not fully represent the benefit from a CI, because degraded acoustic input can be integrated with visual speech information. A fundamental question is the extent to which integration is possible in individuals who have experienced lifelong impoverished or distorted auditory input.

Eighteen prelingually deafened adult CI users (21 to 55 years of age) with at least six months of experience using their implant, and English as their primary language, were tested on several measures of unisensory and multisensory perception. Participants were implanted in adolescence or as adults (13 to 55 years of age at implantation). One of their tasks was open-set sentence identification under AO, visual-only (VO), and audiovisual (AV) conditions. Stimuli were presented in lists of sentences equated for expected mean scores. Participants responded to each sentence by typing on a computer keyboard the whole sentence or any words or parts of words they understood. Fourteen participants showed good or excellent levels of AV integration; their mean percent words correct scores were VO, 31% (range, 15-58%); AO, 27% (range, 1-74%); and AV, 67% (range, 16-91%). The remaining four participants demonstrated little integrative ability; their mean percent words correct scores were VO, 15% (range, 7-29%); AO, 18% (range, 0-72%); and AV, 36% (range, 19-74%).
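One common way to summarize audiovisual benefit from scores like these is the relative AV gain: the improvement of the AV score over the better unisensory score, scaled by the room left for improvement. The short Python sketch below applies this measure to the group means reported above; it is offered only as an illustration and is not necessarily the integration metric used in this study.

```python
def relative_av_gain(ao, vo, av):
    """Relative audiovisual gain.

    Scores are percent words correct. The gain is the AV improvement over
    the better unisensory score, divided by the maximum possible improvement.
    Illustrative only; not necessarily the study's own integration metric.
    """
    best_unisensory = max(ao, vo)
    return (av - best_unisensory) / (100.0 - best_unisensory)

# Group means from the abstract (percent words correct):
print(relative_av_gain(ao=27, vo=31, av=67))  # higher-integration subgroup, ~0.52
print(relative_av_gain(ao=18, vo=15, av=36))  # lower-integration subgroup, ~0.22
```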

Participants were also tested on detection of a spoken acoustic “ba” [1] within an external noise paradigm [2] at four fixed noise levels (i.e., no noise; and 20, 40, and 60 dB SPL white noise). A two-alternative forced-choice adaptive staircase method was used in which the acoustic signal level was varied to obtain the 79.4% correct detection thresholds. Four conditions were tested: (1) AO, (2) audio with a vibrotactile pulse-train stimulus (AT), (3) audio with a rectangular visual stimulus (AVR), and (4) audio with visual speech (AVS). Two independent sources of potential multisensory threshold improvement were modeled: a reduction in intrinsic noise and an increase in sampling efficiency. Intrinsic noise is the inherent noise in the sensory system and is theoretically stimulus-invariant. Efficiency is a measure of how well task-relevant stimulus information is utilized. Results showed that multisensory detection efficiency was higher than AO efficiency but generally lower than the comparable efficiencies of normal-hearing adults. Thus, the high levels of integration in the AV spoken sentence test were dissociated from the efficiencies in the detection paradigm. As might be expected, intrinsic noise was higher than normal. These results suggest that integrative multisensory enhancements in this prelingually deaf population vary as a function of the task. Research supported by NIH/NIDCD DC008308.
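The separation of intrinsic noise from sampling efficiency follows the equivalent-input-noise logic of the external noise paradigm [2]: threshold signal energy is assumed to grow linearly with external noise level, with the line's intercept reflecting the equivalent intrinsic noise and its slope the inverse of sampling efficiency. The Python sketch below shows how the two parameters can be estimated from thresholds at several noise levels; the numbers are illustrative rather than study data, and the d' value assumes a two-alternative forced-choice task tracking 79.4% correct.

```python
import numpy as np

# Equivalent-input-noise model (linear amplifier form, cf. [2]):
#   E_t = (d'^2 / J) * (N_ext + N_eq)
# where E_t is threshold signal energy, N_ext external noise spectral density,
# N_eq equivalent intrinsic noise, and J sampling efficiency relative to an
# ideal observer. All numeric values below are illustrative, not study data.

d_prime = 1.16  # d' at ~79.4% correct in two-alternative forced choice (assumed)

n_ext = np.array([0.0, 1e-6, 1e-5, 1e-4])               # external noise levels (arb. units)
e_thresh = np.array([2.1e-6, 4.0e-6, 2.2e-5, 2.0e-4])   # threshold energies (arb. units)

# Fit the straight line E_t = slope * N_ext + intercept.
slope, intercept = np.polyfit(n_ext, e_thresh, 1)

n_eq = intercept / slope           # equivalent intrinsic noise (stimulus-invariant term)
efficiency = d_prime**2 / slope    # sampling efficiency (fraction of ideal-observer performance)

print(f"equivalent intrinsic noise N_eq = {n_eq:.3g}")
print(f"sampling efficiency J = {efficiency:.3g}")
```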


1. Bernstein, L.E., E.T. Auer, Jr., and S. Takayanagi. Speech Communication, 2004. 44(1-4): p. 5-18.
2. Legge, G.E., D. Kersten, and A.E. Burgess. Journal of the Optical Society of America A, 1987. 4(2): p. 391-404.
