Department of Otolaryngology—Head and Neck Surgery Works
Browsing Department of Otolaryngology—Head and Neck Surgery Works by Issue Date
Now showing 1 - 10 of 272
Item: Lexical Effects on Spoken Word Recognition by Pediatric Cochlear Implant Users (Wolters Kluwer, 1995)
Kirk, Karen Iler; Pisoni, David B.; Osberger, Mary Joe; Otolaryngology -- Head and Neck Surgery, School of Medicine
Objective: The purposes of this study were 1) to examine the effect of lexical characteristics on the spoken word recognition performance of children who use a multichannel cochlear implant (CI), and 2) to compare their performance on lexically controlled word lists with their performance on a traditional test of word recognition, the PB-K.
Design: In two different experiments, 14 to 19 pediatric CI users who demonstrated at least some open-set speech recognition served as subjects. Based on computational analyses, word lists were constructed to allow systematic examination of the effects of word frequency, lexical density (i.e., the number of phonemically similar words, or neighbors), and word length. The subjects' performance on these new tests and the PB-K also was compared.
Results: The percentage of words correctly identified was significantly higher for lexically "easy" words (high frequency words with few neighbors) than for "hard" words (low frequency words with many neighbors), but there was no lexical effect on phoneme recognition scores. Word recognition performance was consistently higher on the lexically controlled lists than on the PB-K. In addition, word recognition was better for multisyllabic than for monosyllabic stimuli.
Conclusions: These results demonstrate that pediatric cochlear implant users are sensitive to the acoustic-phonetic similarities among words, that they organize words into similarity neighborhoods in long-term memory, and that they use this structural information in recognizing isolated words. The results further suggest that the PB-K underestimates these subjects' spoken word recognition.

Item: New directions for assessing speech perception in persons with sensory aids (Sage, 1995)
Kirk, K. I.; Pisoni, D. B.; Sommers, M. S.; Young, M.; Evanson, C.; Otolaryngology -- Head and Neck Surgery, School of Medicine
This study examined the influence of stimulus variability and lexical difficulty on the speech perception performance of adults who used either multichannel cochlear implants or conventional hearing aids. The effects of stimulus variability were examined by comparing word identification in single-talker versus multiple-talker conditions. Lexical effects were assessed by comparing recognition of "easy" words (i.e., words that occur frequently and have few phonemically similar words, or neighbors) with "hard" words (i.e., words with the opposite lexical characteristics). Word recognition performance was assessed in either closed- or open-set response formats. The results demonstrated that both stimulus variability and lexical difficulty influenced word recognition performance. Identification scores were poorer in the multiple-talker than in the single-talker conditions. Also, scores for lexically "easy" items were better than those for "hard" items. The effects of stimulus variability were not evident when a closed-set response format was employed.

Item: Effects of stimulus variability on speech perception in listeners with hearing impairment (ASHA, 1997)
Kirk, Karen Iler; Pisoni, David B.; Miyamoto, R. Christopher; Otolaryngology -- Head and Neck Surgery, School of Medicine
Traditional word-recognition tests typically use phonetically balanced (PB) word lists produced by one talker at one speaking rate. Intelligibility measures based on these tests may not adequately evaluate the perceptual processes used to perceive speech under more natural listening conditions involving many sources of stimulus variability. The purpose of this study was to examine the influence of stimulus variability and lexical difficulty on the speech-perception abilities of 17 adults with mild-to-moderate hearing loss.
The effects of stimulus variability were studied by comparing word-identification performance in single-talker versus multiple-talker conditions and at different speaking rates. Lexical difficulty was assessed by comparing recognition of "easy" words (i.e., words that occur frequently and have few phonemically similar neighbors) with "hard" words (i.e., words that occur infrequently and have many similar neighbors). Subjects also completed a 20-item questionnaire to rate their speech understanding abilities in daily listening situations. Both sources of stimulus variability produced significant effects on speech intelligibility. Identification scores were poorer in the multiple-talker condition than in the single-talker condition, and word-recognition performance decreased as speaking rate increased. Lexical effects on speech intelligibility were also observed. Word-recognition performance was significantly higher for lexically easy words than for lexically hard words. Finally, word-recognition performance was correlated with scores on the self-report questionnaire rating speech understanding under natural listening conditions. The pattern of results suggests that perceptually robust speech-discrimination tests are able to assess several underlying aspects of speech perception in the laboratory and clinic that appear to generalize to conditions encountered in natural listening situations where the listener is faced with many different sources of stimulus variability. That is, word-recognition performance measured under conditions where the talker varied from trial to trial was better correlated with self-reports of listening ability than was performance in a single-talker condition where variability was constrained.

Item: Some Considerations in Evaluating Spoken Word Recognition by Normal-Hearing, Noise-Masked Normal-Hearing, and Cochlear Implant Listeners. I: The Effects of Response Format (Wolters Kluwer, 1997)
Sommers, Mitchell S.; Kirk, Karen Iler; Pisoni, David B.; Otolaryngology -- Head and Neck Surgery, School of Medicine
Objective: The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words: perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations.
Design: The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words).
Results: Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items.
Conclusions: These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH, NMNH, and CI listeners engage similar perceptual operations to identify spoken words. Implications of these findings for the design of new test batteries that can provide comprehensive evaluations of the individual capacities needed for processing spoken language are discussed.

Item: Recognizing Spoken Words: The Neighborhood Activation Model (Wolters Kluwer, 1998)
Luce, Paul A.; Pisoni, David B.; Otolaryngology -- Head and Neck Surgery, School of Medicine
Objective: A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition.
Design: Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming.
Results: The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing impaired populations of children and adults.

Item: Audiovisual Integration of Speech by Children and Adults with Cochlear Implants (Institute of Electrical and Electronics Engineers, 2002)
Kirk, Karen Iler; Pisoni, David B.; Lachs, Lorin; Department of Otolaryngology--Head & Neck Surgery, School of Medicine
The present study examined how prelingually deafened children and postlingually deafened adults with cochlear implants (CIs) combine visual speech information with auditory cues. Performance was assessed under auditory-alone (A), visual-alone (V), and combined audiovisual (AV) presentation formats. A measure of visual enhancement, RA, was used to assess the gain in performance provided in the AV condition relative to the maximum possible performance in the auditory-alone format. Word recognition was highest for AV presentation, followed by A and V, respectively. Children who received more visual enhancement also produced more intelligible speech. Adults with CIs made better use of visual information in more difficult listening conditions (e.g., when multiple talkers or phonemically similar words were used). The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.

Item: General intelligence and modality-specific differences in performance: a response to Schellenberg (2008) (Empirical Musicology Review, 2009-01)
Tierney, Adam T.; Bergeson, Tonya R.; Pisoni, David B.; Otolaryngology -- Head and Neck Surgery, School of Medicine
Tierney et al. (2008) reported that musicians performed better than non-musicians on an auditory sequence memory task, but the two groups did not differ in performance on a sequential visuo-spatial memory task. Schellenberg (2008) claims that these results can be attributed entirely to differences in IQ. This explanation, however, cannot account for the fact that the musicians' advantage was modality-specific.

Item: Effects of congenital hearing loss and cochlear implantation on audiovisual speech perception in infants and children (IOS Press, 2010)
Bergeson, Tonya R.; Houston, Derek M.; Miyamoto, Richard T.; Otolaryngology -- Head and Neck Surgery, School of Medicine
Purpose: Cochlear implantation has recently become available as an intervention strategy for young children with profound hearing impairment. In fact, infants as young as 6 months are now receiving cochlear implants (CIs), and even younger infants are being fitted with hearing aids (HAs). Because early audiovisual experience may be important for normal development of speech perception, it is important to investigate the effects of a period of auditory deprivation and amplification type on multimodal perceptual processes of infants and children. The purpose of this study was to investigate audiovisual perception skills in normal-hearing (NH) infants and children and deaf infants and children with CIs and HAs of similar chronological ages.
Methods: We used an Intermodal Preferential Looking Paradigm to present the same woman’s face articulating two words (“judge” and “back”) in temporal synchrony on two sides of a TV monitor, along with an auditory presentation of one of the words.
Results: The results showed that NH infants and children spontaneously matched auditory and visual information in spoken words; deaf infants and children with HAs did not integrate the audiovisual information; and deaf infants and children with CIs initially did not integrate the audiovisual information but gradually came to match the auditory and visual information in spoken words.
Conclusions: These results suggest that a period of auditory deprivation affects multimodal perceptual processes, which may begin to develop normally after several months of auditory experience.

Item: An analysis of hearing aid fittings in adults using cochlear implants and contralateral hearing aids (Wiley, 2010-12)
Harris, Michael S.; Hay-McCutcheon, Marcia; Otolaryngology -- Head and Neck Surgery, School of Medicine
Objectives/Hypothesis: The objective of this study was to assess the appropriateness of hearing aid fittings within a sample of adult cochlear implant recipients who use a hearing aid in the contralateral ear (i.e., bimodal stimulation).
Methods: The hearing aid gain was measured using real ear testing for 14 postlingually deaf English-speaking adults who use a cochlear implant in the contralateral ear. Unaided and aided audiometric testing assessed the degree of functional gain derived from hearing aid use.
Results: On average, the target to actual output level difference was within 10 dB only at frequencies of 750 Hz and 1,000 Hz. Only 1 of the 14 study participants had a hearing aid for which the majority of the tested frequencies were within 10 dB of the target gain. In addition, a greater amount of functional gain (i.e., the improvement in behavioral thresholds provided by amplification) was observed at lower frequencies than at higher frequencies.
Conclusions: Hearing aid settings in our sample were suboptimal and may be regarded as a contributing factor to the variability in bimodal benefit. Refining hearing aid fitting strategies tailored to the needs of the concurrent cochlear implant and hearing aid user is recommended.

Item: Long-Term Speech and Language Outcomes in Prelingually Deaf Children, Adolescents and Young Adults Who Received Cochlear Implants in Childhood (Karger, 2013)
Ruffin, Chad V.; Kronenberger, William G.; Colson, Bethany G.; Henning, Shirley C.; Pisoni, David B.; Otolaryngology -- Head and Neck Surgery, School of Medicine
This study investigated long-term speech and language outcomes in 51 prelingually deaf children, adolescents and young adults who received cochlear implants (CIs) prior to 7 years of age and had used their implants for at least 7 years. Average speech perception scores were similar to those found in prior research with other samples of experienced CI users. Mean language test scores were lower than norm-referenced scores from nationally representative normal-hearing, typically developing samples, although a majority of the CI users scored within 1 standard deviation of the normative mean or higher on the Peabody Picture Vocabulary Test, Fourth Edition (63%), and the Clinical Evaluation of Language Fundamentals, Fourth Edition (69%). Speech perception scores were negatively associated with a meningitic etiology of hearing loss, older age at implantation, poorer preimplant unaided pure-tone average thresholds, lower family income, and the use of 'total communication'. Subjects who had used CIs for 15 years or more were more likely to have these characteristics and were more likely to score lower on measures of speech perception compared to those who had used CIs for 14 years or less. The aggregation of these risk factors in the subgroup with 15 or more years of CI use accounts for their lower speech perception scores and may stem from the more conservative CI candidacy criteria in use at the beginning of pediatric cochlear implantation.
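The per-frequency fitting analysis described in the hearing aid fittings item above (flagging each tested frequency where measured real-ear gain falls within 10 dB of the prescriptive target, then asking whether a majority of frequencies meet that criterion) can be sketched as follows. This is a minimal illustration of the comparison logic only; the frequencies, targets, and measured values are invented placeholders, not data from the study:

```python
# Sketch of the within-10-dB fitting check: at each audiometric test
# frequency, a fit passes when |measured gain - target gain| <= 10 dB.
# All numeric values below are hypothetical, chosen only so that the
# example passes at 750 and 1,000 Hz, echoing the pattern the abstract
# reports on average.

TOLERANCE_DB = 10

def within_target(targets_db, measured_db, tolerance_db=TOLERANCE_DB):
    """Return {frequency_hz: bool}: True where measured gain is
    within tolerance of the prescriptive target."""
    return {
        freq: abs(measured_db[freq] - target) <= tolerance_db
        for freq, target in targets_db.items()
    }

def majority_within(results):
    """True when most tested frequencies meet the tolerance."""
    return sum(results.values()) > len(results) / 2

# Hypothetical single-listener fit (gain values in dB)
targets = {250: 15, 500: 25, 750: 30, 1000: 32, 2000: 35, 4000: 38}
measured = {250: 2, 500: 12, 750: 24, 1000: 27, 2000: 20, 4000: 15}

results = within_target(targets, measured)
print(results)                 # per-frequency pass/fail
print(majority_within(results))  # whether the fit passes overall
```

With these placeholder numbers only 750 Hz and 1,000 Hz pass, so the majority criterion fails, mirroring the finding that only 1 of 14 participants had a majority of frequencies within target.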