IU Indianapolis ScholarWorks

Browsing by Subject "Phonetics"

Now showing 1 - 5 of 5
  • Dataglove measurement of joint angles in sign language handshapes
    (John Benjamins, 2012) Eccarius, Petra; Bour, Rebecca; Scheidt, Robert A.; Medicine, School of Medicine
    In sign language research, we understand little about articulatory factors involved in shaping phonemic boundaries or the amount (and articulatory nature) of acceptable phonetic variation between handshapes. To date, there exists no comprehensive analysis of handshape based on the quantitative measurement of joint angles during sign production. The purpose of our work is to develop a methodology for collecting and visualizing quantitative handshape data in an attempt to better understand how handshapes are produced at a phonetic level. In this pursuit, we seek to quantify the flexion and abduction angles of the finger joints using a commercial data glove (CyberGlove; Immersion Inc.). We present calibration procedures used to convert raw glove signals into joint angles. We then implement those procedures and evaluate their ability to accurately predict joint angle. Finally, we provide examples of how our recording techniques might inform current research questions.
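A minimal sketch of the kind of calibration step described above, assuming a simple per-sensor linear mapping from raw glove readings to joint angles fitted by least squares; the sensor layout, reference postures, and numbers below are illustrative placeholders, not the procedure or values used in the study.

```python
import numpy as np

# Illustrative calibration: per-sensor linear fit (angle = gain * raw + offset).
# The reference postures and raw readings are made-up placeholders.

# Raw sensor readings recorded while the signer holds known reference postures.
raw_readings = np.array([
    [30.0, 42.0],   # flat hand (index MCP flexion ~0 deg, PIP ~0 deg)
    [78.0, 95.0],   # fist      (index MCP flexion ~90 deg, PIP ~100 deg)
])
known_angles = np.array([
    [0.0, 0.0],
    [90.0, 100.0],
])

def fit_linear_calibration(raw, angles):
    """Fit a gain and offset for each sensor with least squares."""
    gains, offsets = [], []
    for s in range(raw.shape[1]):
        A = np.column_stack([raw[:, s], np.ones(raw.shape[0])])
        coef, *_ = np.linalg.lstsq(A, angles[:, s], rcond=None)
        gains.append(coef[0])
        offsets.append(coef[1])
    return np.array(gains), np.array(offsets)

def raw_to_angles(samples, gains, offsets):
    """Convert raw glove samples into joint angles in degrees."""
    return samples * gains + offsets

gains, offsets = fit_linear_calibration(raw_readings, known_angles)
print(raw_to_angles(np.array([[55.0, 70.0]]), gains, offsets))
```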
  • Lexical Effects on Spoken Word Recognition by Pediatric Cochlear Implant Users
    (Wolters Kluwer, 1995) Kirk, Karen Iler; Pisoni, David B.; Osberger, Mary Joe; Otolaryngology -- Head and Neck Surgery, School of Medicine
    Objective: The purposes of this study were 1) to examine the effect of lexical characteristics on the spoken word recognition performance of children who use a multichannel cochlear implant (CI), and 2) to compare their performance on lexically controlled word lists with their performance on a traditional test of word recognition, the PB-K. Design: In two different experiments, 14 to 19 pediatric CI users who demonstrated at least some open-set speech recognition served as subjects. Based on computational analyses, word lists were constructed to allow systematic examination of the effects of word frequency, lexical density (i.e., the number of phonemically similar words, or neighbors), and word length. The subjects' performance on these new tests and the PB-K was also compared. Results: The percentage of words correctly identified was significantly higher for lexically "easy" words (high frequency words with few neighbors) than for "hard" words (low frequency words with many neighbors), but there was no lexical effect on phoneme recognition scores. Word recognition performance was consistently higher on the lexically controlled lists than on the PB-K. In addition, word recognition was better for multisyllabic than for monosyllabic stimuli. Conclusions: These results demonstrate that pediatric cochlear implant users are sensitive to the acoustic-phonetic similarities among words, that they organize words into similarity neighborhoods in long-term memory, and that they use this structural information in recognizing isolated words. The results further suggest that the PB-K underestimates these subjects' spoken word recognition.
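A rough sketch of how lexical density and word frequency can be computed from a phonemically transcribed lexicon to separate "easy" from "hard" words, in the spirit of the computational analyses mentioned above; the toy lexicon, the one-phoneme neighbor definition, and the cut-offs are assumptions for illustration only.

```python
# Toy phonemic lexicon: word -> (phoneme sequence, frequency per million).
# Entries and counts are illustrative placeholders.
lexicon = {
    "cat":  (("k", "ae", "t"), 40.0),
    "bat":  (("b", "ae", "t"), 15.0),
    "cap":  (("k", "ae", "p"), 20.0),
    "cut":  (("k", "ah", "t"), 60.0),
    "at":   (("ae", "t"),      120.0),
    "cast": (("k", "ae", "s", "t"), 10.0),
}

def edit_distance(a, b):
    """Levenshtein distance over phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (pa != pb))
    return dp[-1]

def neighbors(word):
    """Words differing by one phoneme substitution, deletion, or insertion."""
    phones, _ = lexicon[word]
    return [w for w, (p, _) in lexicon.items()
            if w != word and edit_distance(phones, p) == 1]

def lexical_class(word, freq_cut=30.0, density_cut=3):
    """'Easy' = high frequency, few neighbors; 'hard' = low frequency, many.
    The cut-offs are arbitrary, chosen only for the demo."""
    _, freq = lexicon[word]
    n = len(neighbors(word))
    if freq >= freq_cut and n < density_cut:
        return "easy"
    if freq < freq_cut and n >= density_cut:
        return "hard"
    return "mixed"

for w in lexicon:
    print(w, neighbors(w), lexical_class(w))
```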
  • The Perception of Regional Dialects and Foreign Accents by Cochlear Implant Users
    (American Speech-Language-Hearing Association, 2021-02-17) Tamati, Terrin N.; Pisoni, David B.; Moberly, Aaron C.; Otolaryngology -- Head and Neck Surgery, School of Medicine
    Purpose: This preliminary research examined (a) the perception of two common sources of indexical variability in speech—regional dialects and foreign accents, and (b) the relation between indexical processing and sentence recognition among prelingually deaf, long-term cochlear implant (CI) users and normal-hearing (NH) peers. Method: Forty-three prelingually deaf adolescent and adult CI users and 44 NH peers completed a regional dialect categorization task, which consisted of identifying the region of origin of an unfamiliar talker from six dialect regions of the United States. They also completed an intelligibility rating task, which consisted of rating the intelligibility of short sentences produced by native and nonnative (foreign-accented) speakers of American English on a scale from 1 (not intelligible at all) to 7 (very intelligible). Individual performance was compared to demographic factors and sentence recognition scores. Results: Both CI and NH groups demonstrated difficulty with regional dialect categorization, but NH listeners significantly outperformed the CI users. In the intelligibility rating task, both CI and NH listeners rated foreign-accented sentences as less intelligible than native sentences; however, CI users perceived smaller differences in intelligibility between native and foreign-accented sentences. Sensitivity to accent differences was related to sentence recognition accuracy in CI users. Conclusions: Prelingually deaf, long-term CI users are sensitive to accent variability in speech, but less so than NH peers. Additionally, individual differences in CI users' sensitivity to indexical variability were related to sentence recognition abilities, suggesting a common source of difficulty in the perception and encoding of fine acoustic–phonetic details in speech.
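One way such a relation could be operationalized is sketched below: a per-listener accent-sensitivity score (mean native rating minus mean foreign-accented rating) correlated with sentence recognition accuracy. The arrays are placeholder values and the difference-score definition is an assumption, not necessarily the authors' analysis.

```python
import numpy as np

# Illustrative per-listener "accent sensitivity": mean intelligibility rating
# (1-7 scale) for native sentences minus the mean rating for foreign-accented
# sentences. All numbers are placeholders, not data from the study.
native_ratings   = np.array([[6, 7, 6, 7], [5, 6, 6, 5], [7, 6, 7, 7]])  # listener x sentence
accented_ratings = np.array([[4, 5, 4, 4], [5, 5, 6, 5], [3, 4, 3, 4]])
sentence_recognition = np.array([78.0, 52.0, 85.0])  # percent words correct

sensitivity = native_ratings.mean(axis=1) - accented_ratings.mean(axis=1)

# Pearson correlation between accent sensitivity and sentence recognition.
r = np.corrcoef(sensitivity, sentence_recognition)[0, 1]
print("accent sensitivity per listener:", sensitivity)
print("correlation with sentence recognition: r = %.2f" % r)
```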
  • Recognizing Spoken Words: The Neighborhood Activation Model
    (Wolters Kluwer, 1998) Luce, Paul A.; Pisoni, David B.; Otolaryngology -- Head and Neck Surgery, School of Medicine
    Objective: A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design: Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results: The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing-impaired populations of children and adults.
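A sketch of the general shape of a frequency-weighted neighborhood probability rule in the spirit of Luce's choice rule, where a word's identification probability is its own intelligibility-times-frequency support divided by that support plus the summed support of its neighbors; the variable names and numbers are illustrative, and the paper's exact formulation may differ.

```python
def neighborhood_probability(stim_prob, stim_freq, neighbors):
    """
    stim_prob : stimulus-word probability (e.g., derived from phoneme
                confusability), between 0 and 1
    stim_freq : frequency weight of the stimulus word
    neighbors : list of (neighbor_prob, neighbor_freq) pairs
    """
    stim_support = stim_prob * stim_freq
    neighbor_support = sum(p * f for p, f in neighbors)
    return stim_support / (stim_support + neighbor_support)

# A word with few, low-frequency neighbors ("easy") ...
easy = neighborhood_probability(0.6, 80.0, [(0.3, 5.0), (0.2, 10.0)])
# ... versus a word with many, high-frequency neighbors ("hard").
hard = neighborhood_probability(0.6, 10.0, [(0.5, 90.0), (0.4, 60.0), (0.4, 40.0)])
print(f"easy: {easy:.2f}, hard: {hard:.2f}")
```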
  • Some Considerations in Evaluating Spoken Word Recognition by Normal-Hearing, Noise-Masked Normal-Hearing, and Cochlear Implant Listeners. I: The Effects of Response Format
    (Wolters Kluwer, 1997) Sommers, Mitchell S.; Kirk, Karen Iler; Pisoni, David B.; Otolaryngology -- Head and Neck Surgery, School of Medicine
    Objective: The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words: perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations. Design: The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words). Results: Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items. Conclusions: These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH, NMNH, and CI listeners engage similar perceptual operations to identify spoken words. Implications of these findings for the design of new test batteries that can provide comprehensive evaluations of the individual capacities needed for processing spoken language are discussed.
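A small sketch of how open-set and closed-set responses might be scored and tabulated by talker condition; the trial records and scoring conventions are placeholders, not the study's materials.

```python
from statistics import mean

# Illustrative scoring of open-set vs. closed-set word identification and
# tabulation of percent correct by talker condition. Placeholder trials only.
trials = [
    # (talker_condition, response_format, target, response, alternatives)
    ("single_talker", "open",   "boat",  "boat",  None),
    ("single_talker", "open",   "juice", "juice", None),
    ("multi_talker",  "open",   "boat",  "vote",  None),
    ("multi_talker",  "open",   "juice", "juice", None),
    ("single_talker", "closed", "boat",  "boat",  ["boat", "coat", "goat", "vote"]),
    ("multi_talker",  "closed", "boat",  "boat",  ["boat", "coat", "goat", "vote"]),
]

def score(target, response, alternatives):
    # Open set: any response is allowed and must match the target exactly.
    # Closed set: the response is chosen from the printed alternatives.
    if alternatives is not None and response not in alternatives:
        return 0
    return int(response == target)

def percent_correct(condition, response_format):
    scores = [score(t, r, a) for c, f, t, r, a in trials
              if c == condition and f == response_format]
    return 100.0 * mean(scores)

for fmt in ("open", "closed"):
    for cond in ("single_talker", "multi_talker"):
        print(f"{fmt:6s} {cond:13s} {percent_correct(cond, fmt):5.1f}% correct")
```

In this toy example the closed-set scores come out identical across talker conditions while the open-set scores do not, which loosely mirrors the pattern the abstract describes.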