Browsing by Author "Kirk, Karen Iler"
Now showing 1-4 of 4
Item
AUDIOVISUAL INTEGRATION OF SPEECH BY CHILDREN AND ADULTS WITH COCHLEAR IMPLANTS (Institute of Electrical and Electronics Engineers, 2002)
Kirk, Karen Iler; Pisoni, David B.; Lachs, Lorin; Department of Otolaryngology--Head & Neck Surgery, School of Medicine
The present study examined how prelingually deafened children and postlingually deafened adults with cochlear implants (CIs) combine visual speech information with auditory cues. Performance was assessed under auditory-alone (A), visual-alone (V), and combined audiovisual (AV) presentation formats. A measure of visual enhancement, RA, was used to assess the gain in performance provided in the AV condition relative to the maximum possible performance in the auditory-alone format. Word recognition was highest for AV presentation, followed by A and V, respectively. Children who received more visual enhancement also produced more intelligible speech. Adults with CIs made better use of visual information in more difficult listening conditions (e.g., when multiple talkers or phonemically similar words were used). The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
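The abstract above refers to a visual-enhancement measure, RA, but does not spell out how it is computed. A common way to express this kind of relative gain is the proportion of the headroom above the auditory-alone score that is recovered in the audiovisual condition; the minimal sketch below assumes percent-correct scores, and the function name is illustrative rather than taken from the paper.

```python
def relative_visual_enhancement(av_score: float, a_score: float) -> float:
    """Relative gain from adding visual speech cues.

    Both arguments are percent-correct scores (0-100). The result is the
    fraction of the possible improvement above the auditory-alone score
    that is actually realized in the audiovisual condition.
    """
    if a_score >= 100.0:
        raise ValueError("Auditory-alone score leaves no room for gain.")
    return (av_score - a_score) / (100.0 - a_score)


# Example: 40% correct auditory-alone, 70% correct audiovisual
# -> 0.5, i.e., half of the available headroom was recovered.
print(relative_visual_enhancement(av_score=70.0, a_score=40.0))
```

On this kind of definition, a listener with little headroom (a high auditory-alone score) can still show a large relative enhancement from a modest absolute gain, which is why it is reported instead of the raw AV-minus-A difference.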
Item
Effects of stimulus variability on speech perception in listeners with hearing impairment (ASHA, 1997)
Kirk, Karen Iler; Pisoni, David B.; Miyamoto, R. Christopher; Otolaryngology -- Head and Neck Surgery, School of Medicine
Traditional word-recognition tests typically use phonetically balanced (PB) word lists produced by one talker at one speaking rate. Intelligibility measures based on these tests may not adequately evaluate the perceptual processes used to perceive speech under more natural listening conditions involving many sources of stimulus variability. The purpose of this study was to examine the influence of stimulus variability and lexical difficulty on the speech-perception abilities of 17 adults with mild-to-moderate hearing loss. The effects of stimulus variability were studied by comparing word-identification performance in single-talker versus multiple-talker conditions and at different speaking rates. Lexical difficulty was assessed by comparing recognition of "easy" words (i.e., words that occur frequently and have few phonemically similar neighbors) with "hard" words (i.e., words that occur infrequently and have many similar neighbors). Subjects also completed a 20-item questionnaire to rate their speech understanding abilities in daily listening situations. Both sources of stimulus variability produced significant effects on speech intelligibility. Identification scores were poorer in the multiple-talker condition than in the single-talker condition, and word-recognition performance decreased as speaking rate increased. Lexical effects on speech intelligibility were also observed: word-recognition performance was significantly higher for lexically easy words than for lexically hard words. Finally, word-recognition performance was correlated with scores on the self-report questionnaire rating speech understanding under natural listening conditions. The pattern of results suggests that perceptually robust speech-discrimination tests can assess several underlying aspects of speech perception in the laboratory and clinic that appear to generalize to conditions encountered in natural listening situations, where the listener is faced with many different sources of stimulus variability. That is, word-recognition performance measured under conditions where the talker varied from trial to trial was better correlated with self-reports of listening ability than was performance in a single-talker condition where variability was constrained.
Item
Lexical Effects on Spoken Word Recognition by Pediatric Cochlear Implant Users (Wolters Kluwer, 1995)
Kirk, Karen Iler; Pisoni, David B.; Osberger, Mary Joe; Otolaryngology -- Head and Neck Surgery, School of Medicine
Objective: The purposes of this study were 1) to examine the effect of lexical characteristics on the spoken word recognition performance of children who use a multichannel cochlear implant (CI), and 2) to compare their performance on lexically controlled word lists with their performance on a traditional test of word recognition, the PB-K.
Design: In two different experiments, 14 to 19 pediatric CI users who demonstrated at least some open-set speech recognition served as subjects. Based on computational analyses, word lists were constructed to allow systematic examination of the effects of word frequency, lexical density (i.e., the number of phonemically similar words, or neighbors), and word length. The subjects' performance on these new tests and on the PB-K was also compared.
Results: The percentage of words correctly identified was significantly higher for lexically "easy" words (high-frequency words with few neighbors) than for "hard" words (low-frequency words with many neighbors), but there was no lexical effect on phoneme recognition scores. Word recognition performance was consistently higher on the lexically controlled lists than on the PB-K. In addition, word recognition was better for multisyllabic than for monosyllabic stimuli.
Conclusions: These results demonstrate that pediatric cochlear implant users are sensitive to the acoustic-phonetic similarities among words, that they organize words into similarity neighborhoods in long-term memory, and that they use this structural information in recognizing isolated words. The results further suggest that the PB-K underestimates these subjects' spoken word recognition.
Item
Some Considerations in Evaluating Spoken Word Recognition by Normal-Hearing, Noise-Masked Normal-Hearing, and Cochlear Implant Listeners. I: The Effects of Response Format (Wolters Kluwer, 1997)
Sommers, Mitchell S.; Kirk, Karen Iler; Pisoni, David B.; Otolaryngology -- Head and Neck Surgery, School of Medicine
Objective: The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words: perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on the evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations.
Design: The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words).
Results: Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers than when all of the words were spoken by a single talker. Open-set word recognition was also better for lexically easy words than for lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty, even when the response alternatives were systematically selected to maximize confusability with the target items.
Conclusions: These findings suggest that, although closed-set tests may provide important information for the clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH, NMNH, and CI listeners engage similar perceptual operations to identify spoken words. Implications of these findings for the design of new test batteries that can provide comprehensive evaluations of the individual capacities needed for processing spoken language are discussed.
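Several of the items above classify words as lexically "easy" (high frequency, few phonemically similar neighbors) or "hard" (low frequency, many neighbors). The sketch below illustrates one conventional way to operationalize that idea, counting as neighbors any lexicon entries that differ from the target by a single phoneme substitution, deletion, or addition. The toy transcriptions, cutoff values, and function names are illustrative assumptions, not the materials or criteria used in these studies.

```python
def one_phoneme_apart(a: tuple[str, ...], b: tuple[str, ...]) -> bool:
    """True if two phoneme sequences differ by exactly one substitution,
    deletion, or addition (phoneme-level edit distance of 1)."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    i = j = 0
    skipped = False
    while i < len(short) and j < len(long_):
        if short[i] == long_[j]:
            i += 1
            j += 1
        elif skipped:
            return False
        else:
            skipped = True  # tolerate one extra phoneme in the longer form
            j += 1
    return True


def neighborhood_density(word: tuple[str, ...],
                         lexicon: list[tuple[str, ...]]) -> int:
    """Number of lexicon entries that are phonemic neighbors of `word`."""
    return sum(one_phoneme_apart(word, other) for other in lexicon if other != word)


def lexical_class(word: tuple[str, ...],
                  lexicon: list[tuple[str, ...]],
                  log_frequency: dict[tuple[str, ...], float],
                  freq_cutoff: float = 1.0,
                  density_cutoff: int = 10) -> str:
    """Rough easy/hard split; the cutoff values are placeholders only."""
    frequent = log_frequency.get(word, 0.0) >= freq_cutoff
    sparse = neighborhood_density(word, lexicon) <= density_cutoff
    if frequent and sparse:
        return "easy"
    if not frequent and not sparse:
        return "hard"
    return "intermediate"


# Tiny illustration with made-up phonemic transcriptions:
cat, bat, cut, cast = ("k", "ae", "t"), ("b", "ae", "t"), ("k", "ah", "t"), ("k", "ae", "s", "t")
lexicon = [cat, bat, cut, cast]
print(neighborhood_density(cat, lexicon))  # 3: bat, cut, and cast are each one phoneme away
```

In the studies above, the actual word lists were built from computational analyses of the lexicon (as the third item notes); the placeholder cutoffs here only mark where frequency and density criteria would plug in.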