Browsing by Subject "speech perception"
Now showing 1–6 of 6
Item: AUDIOVISUAL INTEGRATION OF SPEECH BY CHILDREN AND ADULTS WITH COCHLEAR IMPLANTS (Institute of Electrical and Electronics Engineers, 2002)
Kirk, Karen Iler; Pisoni, David B.; Lachs, Lorin; Department of Otolaryngology--Head & Neck Surgery, School of Medicine
The present study examined how prelingually deafened children and postlingually deafened adults with cochlear implants (CIs) combine visual speech information with auditory cues. Performance was assessed under auditory-alone (A), visual-alone (V), and combined audiovisual (AV) presentation formats. A measure of visual enhancement, RA, was used to assess the gain in performance provided in the AV condition relative to the maximum possible performance in the auditory-alone format. Word recognition was highest for AV presentation, followed by A and V, respectively. Children who received more visual enhancement also produced more intelligible speech. Adults with CIs made better use of visual information in more difficult listening conditions (e.g., when multiple talkers or phonemically similar words were used). The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
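The RA measure above is described as the audiovisual gain relative to the maximum possible improvement over auditory-alone performance. The abstract does not give the formula; a common formulation consistent with that description, assuming A and AV are percent-correct word recognition scores, is:

R_A = \frac{AV - A}{100 - A}

For example, moving from 60% correct (A) to 80% correct (AV) yields R_A = (80 - 60) / (100 - 60) = 0.5, i.e., half of the available headroom over the auditory-alone score was realized.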
Item: A Clinical Tool for Assessing Infants' Auditory Acuity (Office of the Vice Chancellor for Research, 2010-04-09)
Houston, Derek
My translational research involves developing tools to assess speech perception and language skills during infancy. Language assessment during infancy helps identify children who are at risk for language disorders so that they can receive appropriate intervention as soon as possible. It also allows speech-language pathologists and other clinicians to evaluate the success of their therapy and intervention strategies.

Item: A cross-linguistic fMRI study of perception of intonation and emotion in Chinese (Wiley, 2003-02-11)
Gandour, Jack; Wong, Donald; Dzemidzic, Mario; Lowe, Mark; Tong, Yunxia; Li, Xiaojian; Anatomy and Cell Biology, School of Medicine
Conflicting data from neurobehavioral studies of the perception of intonation (linguistic) and emotion (affective) in spoken language highlight the need to further examine how functional attributes of prosodic stimuli are related to hemispheric differences in processing capacity. Because of similarities in their acoustic profiles, intonation and emotion permit us to assess to what extent hemispheric lateralization of speech prosody depends on functional instead of acoustical properties. To examine how the brain processes linguistic and affective prosody, an fMRI study was conducted using Chinese, a tone language in which both intonation and emotion may be signaled prosodically, in addition to lexical tones. Ten Chinese and 10 English subjects were asked to perform discrimination judgments of intonation (I: statement, question) and emotion (E: happy, angry, sad) presented in semantically neutral Chinese sentences. A baseline task required passive listening to the same speech stimuli (S). In direct between-group comparisons, the Chinese group showed left-sided frontoparietal activation for both intonation (I vs. S) and emotion (E vs. S) relative to baseline. When comparing intonation relative to emotion (I vs. E), the Chinese group demonstrated prefrontal activation bilaterally but parietal activation in the left hemisphere only. The reverse comparison (E vs. I), on the other hand, revealed activation in anterior and posterior prefrontal regions of the right hemisphere only. These findings show that some aspects of perceptual processing of emotion are dissociable from intonation and, moreover, that they are mediated by the right hemisphere.

Item: Factors Affecting Speech Discrimination in Children with Cochlear Implants: Evidence from Early-Implanted Infants (American Academy of Audiology, 2016-06)
Phan, Jennifer; Houston, Derek M.; Ruffin, Chad; Ting, Jonathan; Holt, Rachael Frush; Otolaryngology -- Head and Neck Surgery, School of Medicine
Background: To learn words and acquire language, children must be able to discriminate and correctly perceive phonemes. Although there has been much research on the general language outcomes of children with cochlear implants (CIs), little is known about the development of speech perception with regard to specific speech processes, such as speech discrimination.
Purpose: The purpose of this study was to investigate the development of speech discrimination in infants with CIs and identify factors that might correlate with speech discrimination skills.
Research Design: Using a Hybrid Visual Habituation procedure, we tested infants with CIs on their ability to discriminate the vowel contrast /i/-/u/. We also gathered demographic and audiological information about each infant.
Study Sample: Children who had received CIs before 2 yr of age served as participants. We tested the children at two intervals after cochlear implantation: 2-4 weeks post CI stimulation (N = 17) and 6-9 mo post CI stimulation (N = 10).
Data Collection and Analysis: The infants' mean looking times during the novel versus old trials of the experiment were measured. A linear regression model (sketched after this entry) was used to evaluate the relationship between the normalized looking time difference and the following variables: chronological age, age at CI stimulation, gender, communication mode, and best unaided pure-tone average.
Results: We found that the best unaided pure-tone average predicted speech discrimination at the early interval. In contrast to some previous speech perception studies that included children implanted before 3 yr of age, age at CI stimulation did not predict speech discrimination performance.
Conclusions: The results suggest that residual acoustic hearing before implantation might facilitate speech discrimination during the early period post cochlear implantation; with more hearing experience, communication mode might have a greater influence on the ability to discriminate speech. This and other studies on age at cochlear implantation suggest that earlier implantation might not have as large an effect on speech perception as it does on other language skills.
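The linear regression described under Data Collection and Analysis above can be written out as follows. The symbol names and the treatment of the categorical predictors (gender, communication mode) are illustrative assumptions; the abstract lists which variables entered the model but not how they were coded:

\Delta_{look} = \beta_0 + \beta_1\,\text{ChronAge} + \beta_2\,\text{AgeAtCI} + \beta_3\,\text{Gender} + \beta_4\,\text{CommMode} + \beta_5\,\text{PTA} + \varepsilon

Here \Delta_{look} is the normalized looking-time difference between novel and old trials, and PTA is the best unaided pure-tone average. Per the reported results, the PTA term predicted discrimination at the early interval, while the age-at-CI-stimulation term did not.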
Item: Neural basis of first and second language processing of sentence-level linguistic prosody (Wiley, 2006-05-22)
Gandour, Jackson; Tong, Yunxia; Talavage, Thomas; Wong, Donald; Dzemidzic, Mario; Xu, Yisheng; Li, Xiaojian; Lowe, Mark; Anatomy and Cell Biology, School of Medicine
A fundamental question in multilingualism is whether the neural substrates are shared or segregated for the two or more languages spoken by polyglots. This study employs functional MRI to investigate the neural substrates underlying the perception of two sentence-level prosodic phenomena that occur in both Mandarin Chinese (L1) and English (L2): sentence focus (sentence-initial vs. sentence-final position of contrastive stress) and sentence type (declarative vs. interrogative modality). Late-onset, medium-proficiency Chinese-English bilinguals were asked to selectively attend to either sentence focus or sentence type in paired three-word sentences in both L1 and L2 and to make speeded-response discrimination judgments. L1 and L2 elicited highly overlapping activations in the frontal, temporal, and parietal lobes. Furthermore, region-of-interest analyses revealed that, for both languages, the sentence focus task elicited a leftward asymmetry in the supramarginal gyrus; both tasks elicited a rightward asymmetry in the mid-portion of the middle frontal gyrus. A direct comparison between L1 and L2 did not show any difference in brain activation in the sentence type task. In the sentence focus task, however, greater activation for L2 than L1 occurred in the bilateral anterior insula and superior frontal sulcus. The sentence focus task also elicited a leftward asymmetry in the posterior middle temporal gyrus for L1 only. Differential activation patterns are attributed primarily to disparities between L1 and L2 in the phonetic manifestation of sentence focus. Such phonetic divergences lead to increased computational demands for processing L2. These findings support the view that L1 and L2 are mediated by a unitary neural system despite a late age of acquisition, although additional neural resources may be required in task-specific circumstances for unequal bilinguals.

Item: Neural correlates of segmental and tonal information in speech perception (Wiley, 2003-10-27)
Gandour, Jack; Xu, Yisheng; Wong, Donald; Dzemidzic, Mario; Lowe, Mark; Li, Xiaojian; Tong, Yunxia; Anatomy and Cell Biology, School of Medicine
The Chinese language provides an optimal window for investigating both segmental and suprasegmental units. The aim of this cross-linguistic fMRI study is to elucidate the neural mechanisms involved in the extraction of Chinese consonants, rhymes, and tones from syllable pairs that are distinguished by only one phonetic feature (minimal) vs. those that are distinguished by two or more phonetic features (non-minimal). Triplets of Chinese monosyllables were constructed for three tasks comparing consonants, rhymes, and tones. Each triplet consisted of two target syllables with an intervening distracter. Ten Chinese and English subjects were asked to selectively attend to the targeted sub-syllabic components and make same-different judgments. Direct between-group comparisons in both minimal and non-minimal pairs reveal increased activation for the Chinese group in predominantly left-sided frontal, parietal, and temporal regions. Within-group comparisons of non-minimal and minimal pairs show that frontal and parietal activity varies for each sub-syllabic component. In the frontal lobe, the Chinese group shows bilateral activation of the anterior middle frontal gyrus (MFG) for rhymes and tones only. Within-group comparisons of consonants, rhymes, and tones show that rhymes induce greater activation in the left posterior MFG for the Chinese group when compared to consonants and tones in non-minimal pairs. These findings collectively support the notion of a widely distributed cortical network underlying different aspects of phonological processing. This neural network is sensitive to the phonological structure of a listener's native language.