The breakdown of multisensory speech perception in autism and schizophrenia

John J. Foxe, Lars Ross, Dave Saint-Amour, Victoria Leavitt, Daniella Blanco, Sophie Molholm
Talk
Time: 2009-07-01 02:20 PM – 02:40 PM
Last modified: 2009-06-04

Abstract


Viewing a speaker’s articulatory movements can greatly improve a listener’s ability to understand spoken words, and this is especially the case under noisy environmental conditions. We have shown that there is a very specific tuning function to this multisensory gain, with audiovisual enhancement showing its maximum effect at fairly specific signal-to-noise ratios (SNRs) (Ross et al., 2007a; Ma et al., 2009). Thus, while multisensory gain is seen across a host of SNRs, it is also evident that there is a ‘special zone’ at intermediate SNRs (approximately –12 dB) where multisensory integration is additionally enhanced. At these intermediate SNR levels, the extent of multisensory enhancement of speech recognition is considerable, amounting to more than a threefold performance improvement relative to an auditory-alone condition. Our data show that the multisensory speech system develops this maximal tuning relatively slowly across the childhood years and that considerable tuning continues to occur into early adolescence.

More recently, we have translated this basic knowledge into the clinical domain, testing multisensory speech perception in a cohort of patients with schizophrenia and also in a pilot study of high-functioning autistic children. In the first of these studies, we assessed the ability to recognize auditory and audiovisual speech in different levels of noise in 18 patients with schizophrenia and compared their performance with that of 18 healthy volunteers (Ross et al., 2007b). We used a large set of monosyllabic words as our stimuli in order to more closely approximate performance in everyday situations. Patients with schizophrenia showed deficits in their ability to derive benefit from visual articulatory motion. Crucially, this impairment was most pronounced at the intermediate SNR levels where multisensory gain is maximally tuned in healthy control subjects. A surprising finding was that, despite known early auditory sensory processing deficits and reports of impairments in speech processing in schizophrenia, patients’ performance in unisensory auditory speech perception remained fully intact. The results showed a specific deficit in multisensory speech processing in the absence of any measurable deficit in unisensory speech processing and, perhaps more interestingly, that this appeared to be mainly a result of a failure to tune the system appropriately. These data suggest that sensory integration dysfunction may be an important and, to date, rather overlooked aspect of schizophrenia.

We have recently followed up this work in a small cohort of high-functioning autistic children. Sensory integration dysfunction has long been speculated to be a core component of autism spectrum disorder, but there has been precious little hard empirical evidence to support this notion. In this pilot study, we find a considerable reduction in multisensory gain in ASD children relative to age- and IQ-matched controls. This deficit becomes progressively more pronounced at lower SNRs.
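
For context on the quantities above, the relations below sketch how signal-to-noise ratio and multisensory gain are commonly expressed; the simple ratio used for gain is an illustrative assumption, since the abstract does not restate the exact metric used in the cited studies.

\[
\mathrm{SNR_{dB}} = 10\,\log_{10}\!\left(\frac{P_{\mathrm{speech}}}{P_{\mathrm{noise}}}\right),
\qquad
\mathrm{gain} = \frac{p_{AV}}{p_{A}},
\]

where \(p_{AV}\) and \(p_{A}\) denote the proportions of words correctly recognized in the audiovisual and auditory-alone conditions. On this reading, the more-than-threefold improvement reported at approximately –12 dB corresponds to \(p_{AV} > 3\,p_{A}\).
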
References

Ma WJ, Zhou X, Ross LA, Foxe JJ, Parra LC. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space. PLoS ONE. 2009;4(3):e4638.

Ross LA, Saint-Amour D, Leavitt VM, Javitt DC, Foxe JJ. Do you see what I am saying? Exploring visual enhancement of speech comprehension in noisy environments. Cereb Cortex. 2007a;17(5):1147-53.

Ross LA, Saint-Amour D, Leavitt VM, Molholm S, Javitt DC, Foxe JJ. Impaired multisensory processing in schizophrenia: deficits in the visual enhancement of speech comprehension under noisy environmental conditions. Schizophr Res. 2007b;97(1-3):173-83.
