Browsing by Subject "Speech Perception"
Item
High-Variability Sentence Recognition in Long-Term Cochlear Implant Users: Associations With Rapid Phonological Coding and Executive Functioning (Wolters Kluwer, 2019-10-01)
Smith, Gretchen N. L.; Pisoni, David B.; Kronenberger, William G.; Otolaryngology -- Head and Neck Surgery, School of Medicine

Objective: The objective of the present study was to determine whether long-term cochlear implant (CI) users would show greater variability in rapid phonological coding skills and greater reliance on slow-effortful compensatory executive functioning (EF) skills than normal hearing (NH) peers on perceptually challenging high-variability sentence recognition tasks. We tested the following three hypotheses: First, CI users would show lower scores on sentence recognition tests involving high speaker and dialect variability than NH controls, even after adjusting for poorer sentence recognition performance by CI users on a conventional low-variability sentence recognition test. Second, variability in fast-automatic rapid phonological coding skills would be more strongly associated with performance on high-variability sentence recognition tasks for CI users than NH peers. Third, compensatory EF strategies would be more strongly associated with performance on high-variability sentence recognition tasks for CI users than NH peers.

Design: Two groups of children, adolescents, and young adults aged 9 to 29 years participated in this cross-sectional study: 49 long-term CI users (≥ 7 years) and 56 NH controls. All participants were tested on measures of rapid phonological coding (Children's Test of Nonword Repetition), conventional sentence recognition (Harvard Sentence Recognition Test), and two novel high-variability sentence recognition tests that varied the indexical attributes of speech [Perceptually Robust English Sentence Test Open-set test (PRESTO) and PRESTO Foreign-Accented English test]. Measures of EF included verbal working memory (WM), spatial WM, controlled cognitive fluency, and inhibition-concentration.

Results: CI users scored lower than NH peers on both tests of high-variability sentence recognition even after conventional sentence recognition skills were statistically controlled. Correlations between rapid phonological coding and high-variability sentence recognition scores were stronger for the CI sample than for the NH sample even after basic sentence perception skills were statistically controlled. Scatterplots revealed different ranges and slopes for the relationship between rapid phonological coding skills and high-variability sentence recognition performance in CI users and NH peers. Although no statistically significant correlations between EF strategies and sentence recognition were found in the CI or NH sample after use of a conservative Bonferroni-type correction, medium to high effect sizes for correlations between verbal WM and sentence recognition in the CI sample suggest that further investigation of this relationship is needed.

Conclusions: These findings provide converging support for neurocognitive models that propose two channels for speech-language processing: a fast-automatic channel that predominates whenever possible and a compensatory slow-effortful processing channel that is activated during perceptually challenging speech processing tasks that are not fully managed by the fast-automatic channel (Ease of Language Understanding, Framework for Understanding Effortful Listening, and Auditory Neurocognitive Model). CI users showed significantly poorer performance on measures of high-variability sentence recognition than NH peers, even after simple sentence recognition was controlled. Nonword repetition scores showed almost no overlap between CI and NH samples, and correlations between nonword repetition scores and high-variability sentence recognition were consistent with greater reliance on fast-automatic phonological coding for high-variability sentence recognition in the CI sample than in the NH sample. Further investigation of the verbal WM-sentence recognition relationship in CI users is recommended. Assessment of fast-automatic phonological processing and slow-effortful EF skills may provide a better understanding of speech perception outcomes in CI users in the clinical setting.

Item
Influence of early linguistic experience on regional dialect categorization by an adult cochlear implant user: a case study (Ovid Technologies (Wolters Kluwer) - Lippincott Williams & Wilkins, 2014-05)
Tamati, Terrin N.; Gilbert, Jaimie L.; Pisoni, David B.; Department of Otolaryngology--Head & Neck Surgery, IU School of Medicine

Objective: To investigate the ability of a cochlear implant user to categorize talkers by region of origin and to examine the influence of prior linguistic experience on the perception of regional dialect variation. A postlingually deafened adult cochlear implant user from the Southern region of the United States completed a six-alternative forced-choice dialect categorization task. The cochlear implant user was most accurate at categorizing unfamiliar talkers from his own region and another familiar dialect region, and least accurate at categorizing talkers from less familiar regions. Although the dialect-specific information made available by a cochlear implant may be degraded compared with information available to normal-hearing listeners, this experienced cochlear implant user was able to reliably categorize unfamiliar talkers by region of origin. Despite an early hearing loss, the participant made use of dialect-specific acoustic-phonetic information in the speech signal and of knowledge of regional dialect differences stored from early exposure before implantation.

Item
Verbal Learning and Memory After Cochlear Implantation in Postlingually Deaf Adults: Some New Findings with the CVLT-II (Wolters Kluwer, 2018)
Pisoni, David B.; Broadstock, Arthur; Wucinich, Taylor; Safdar, Natalie; Miller, Kelly; Hernandez, Luis R.; Vasil, Kara; Boyce, Lauren; Davies, Alexandra; Harris, Michael S.; Castellanos, Irina; Xu, Huiping; Kronenberger, William G.; Moberly, Aaron C.; Biostatistics, IU School of Medicine

OBJECTIVES: Despite the importance of verbal learning and memory in speech and language processing, this domain of cognitive functioning has been virtually ignored in clinical studies of hearing loss and cochlear implants in both adults and children. In this article, we report the results of two studies that used a newly developed visually based version of the California Verbal Learning Test-Second Edition (CVLT-II), a well-known normed neuropsychological measure of verbal learning and memory.

DESIGN: The first study established the validity and feasibility of a computer-controlled visual version of the CVLT-II, which eliminates the effects of audibility of spoken stimuli, in groups of young normal-hearing and older normal-hearing (ONH) adults. A second study was then carried out using the visual CVLT-II format with a group of older postlingually deaf experienced cochlear implant (ECI) users (N = 25) and a group of ONH controls (N = 25) who were matched to ECI users for age, socioeconomic status, and nonverbal IQ. In addition to the visual CVLT-II, subjects provided data on demographics, hearing history, nonverbal IQ, reading fluency, vocabulary, and short-term memory span for visually presented digits. ECI participants were also tested for speech recognition in quiet.

RESULTS: The ECI and ONH groups did not differ on most measures of verbal learning and memory obtained with the visual CVLT-II, but deficits were identified in ECI participants that were related to recency recall, the buildup of proactive interference, and retrieval-induced forgetting. Within the ECI group, nonverbal fluid IQ, reading fluency, and resistance to the buildup of proactive interference from the CVLT-II consistently predicted better speech recognition outcomes.

CONCLUSIONS: Results from this study suggest that several underlying foundational neurocognitive abilities are related to core speech perception outcomes after implantation in older adults. Implications of these findings for explaining individual differences and variability and for predicting speech recognition outcomes after implantation are discussed.