Browsing by Subject "Speech recognition"
Now showing 1 - 6 of 6
Contribution of Verbal Learning & Memory and Spectro-Temporal Discrimination to Speech Recognition in Cochlear Implant Users (Wiley, 2023)
Harris, Michael S.; Hamel, Benjamin L.; Wichert, Kristin; Kozlowski, Kristin; Mleziva, Sarah; Ray, Christin; Pisoni, David B.; Kronenberger, William G.; Moberly, Aaron C.
Psychiatry, School of Medicine

Objectives: Existing cochlear implant (CI) outcomes research demonstrates a high degree of variability in device effectiveness among experienced CI users. Increasing evidence suggests that verbal learning and memory (VL&M) may have an influence on speech recognition with CIs. This study examined the relations in CI users between visual measures of VL&M and speech recognition in a series of models that also incorporated spectro-temporal discrimination. Predictions were that (1) speech recognition would be associated with VL&M abilities and (2) VL&M would contribute to speech recognition outcomes above and beyond spectro-temporal discrimination in multivariable models of speech recognition.

Methods: This cross-sectional study included 30 adult postlingually deaf experienced CI users who completed a nonauditory visual version of the California Verbal Learning Test-Second Edition (v-CVLT-II) to assess VL&M, and the Spectral-Temporally Modulated Ripple Test (SMRT), an auditory measure of spectro-temporal processing. Participants also completed a battery of word and sentence recognition tasks.

Results: CI users showed significant correlations between some v-CVLT-II measures (short-delay free and cued recall, retroactive interference, and "subjective" organizational recall strategies) and speech recognition measures. Performance on the SMRT was correlated with all speech recognition measures. Hierarchical multivariable linear regression analyses showed that SMRT performance accounted for a significant degree of speech recognition outcome variance. Moreover, for all speech recognition measures, VL&M scores contributed independently in addition to SMRT.

Conclusion: Measures of spectro-temporal discrimination and VL&M were associated with speech recognition in CI users. After accounting for spectro-temporal discrimination, VL&M contributed independently to performance on measures of speech recognition for words and sentences produced by single and multiple talkers.

List Equivalency of PRESTO for the Evaluation of Speech Recognition (American Academy of Audiology, 2015-06)
Faulkner, Kathleen F.; Tamati, Terrin N.; Gilbert, Jaimie L.; Pisoni, David B.
Otolaryngology -- Head and Neck Surgery, School of Medicine

BACKGROUND: There is a pressing clinical need for the development of ecologically valid and robust assessment measures of speech recognition. The Perceptually Robust English Sentence Test Open-set (PRESTO) is a new high-variability sentence recognition test that is sensitive to individual differences and was designed for use with several different clinical populations. PRESTO differs from other sentence recognition tests because the target sentences differ in talker, gender, and regional dialect. Increasing interest in using PRESTO as a clinical test of spoken word recognition dictates the need to establish equivalence across test lists.

PURPOSE: The purpose of this study was to establish list equivalency of PRESTO for clinical use.

RESEARCH DESIGN: PRESTO sentence lists were presented to three groups of normal-hearing listeners in noise (multitalker babble [MTB] at 0 dB signal-to-noise ratio) or under eight-channel cochlear implant simulation (CI-Sim).

STUDY SAMPLE: Ninety-one young native speakers of English who were undergraduate students from the Indiana University community participated in this study.

DATA COLLECTION AND ANALYSIS: Participants completed a sentence recognition task using different PRESTO sentence lists.
They listened to sentences presented over headphones and typed in the words they heard on a computer. Keyword scoring was completed offline. Equivalency for sentence lists was determined based on list intelligibility (mean keyword accuracy for each list compared with all other lists) and listener consistency (the relation between mean keyword accuracy on each list for each listener).

RESULTS: Based on measures of list equivalency and listener consistency, ten PRESTO lists were found to be equivalent in the MTB condition, nine lists were equivalent in the CI-Sim condition, and six PRESTO lists were equivalent in both conditions.

CONCLUSIONS: PRESTO is a valuable addition to the clinical toolbox for assessing sentence recognition across different populations. Because the test condition influenced the overall intelligibility of lists, researchers and clinicians should take the presentation conditions into consideration when selecting the best PRESTO lists for their research or clinical protocols.

Non-native listeners' recognition of high-variability speech using PRESTO (Ingenta, 2014-10)
Tamati, Terrin N.; Pisoni, David B.
Department of Otolaryngology -- Head & Neck Surgery, IU School of Medicine

BACKGROUND: Natural variability in speech is a significant challenge to robust, successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2.

PURPOSE: The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners.
RESEARCH DESIGN: Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities.

STUDY SAMPLE: Native speakers of Mandarin (n = 25) living in the United States, recruited from the Indiana University community, participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study.

DATA COLLECTION AND ANALYSIS: Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavior Rating Inventory of Executive Function - Adult Version) self-report questionnaire. Scores from the non-native listeners on behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences.

RESULTS: Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners' keyword recognition scores were also lower than native listeners' scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was found to be related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge.

CONCLUSIONS: High-variability sentences in multitalker babble were particularly challenging for non-native listeners. Difficulty under high-variability testing conditions was related to lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life.

Recognizing spoken words in semantically-anomalous sentences: Effects of executive control in early-implanted deaf children with cochlear implants (Taylor & Francis, 2021)
Pisoni, David B.; Kronenberger, William G.
Otolaryngology -- Head and Neck Surgery, School of Medicine

Objective: To investigate differences in speech, language, and neurocognitive functioning in normal-hearing (NH) children and deaf children with cochlear implants (CIs) using anomalous sentences. Anomalous sentences block the use of downstream predictive coding during speech recognition, allowing for investigation of rapid phonological coding and executive functioning.

Methods: Extreme groups were extracted from samples of children with CIs and NH peers (ages 9 to 17) based on the 7 highest and 7 lowest scores on the Harvard-Anomalous sentence test (Harvard-A).
The four groups were compared on measures of speech, language, and neurocognitive functioning.

Results: The 7 highest-scoring CI users and the 7 lowest-scoring NH peers did not differ in Harvard-A scores but did differ significantly on measures of neurocognitive functioning. Compared to low-performing NH peers, high-performing children with CIs had significantly lower nonword repetition scores but higher nonverbal IQ scores, greater verbal working memory capacity, and excellent executive functioning skills related to inhibition, shifting attention/mental flexibility, and working memory updating.

Discussion: High-performing deaf children with CIs are able to compensate for their sensory deficits and weaknesses in automatic phonological coding of speech by engaging in a slow, effortful mode of information processing involving inhibition, working memory, and executive functioning.

Silent speech recognition in EEG-based brain computer interface (2015)
Ghane, Parisa; Li, Lingxi; Tovar, Andres; Christopher, Lauren Ann; King, Brian

A Brain Computer Interface (BCI) is a hardware and software system that establishes direct communication between the human brain and the environment. In a BCI system, brain messages pass through wires and external computers instead of the normal pathway of nerves and muscles. The general workflow in all BCIs is to measure brain activity, process it, and convert it into an output readable by a computer. The measurement of electrical activity in different parts of the brain is called electroencephalography (EEG). Many sensor technologies with different numbers of electrodes are available to record brain activity along the scalp. Each electrode captures a weighted sum of the activity of all neurons in the area around that electrode. To establish a BCI system, a set of electrodes must be placed on the scalp, along with a tool that sends the signals to a computer for training a system that can identify the important information, extract it from the raw signal, and use it to recognize the user's intention. Finally, a control signal is generated according to the application. This thesis describes the step-by-step training and testing of a BCI system intended for a person who has lost the ability to speak through an accident or surgery but still has healthy brain tissue. The goal is to establish an algorithm that recognizes different vowels from EEG signals. The approach uses a bandpass filter to remove noise and artifacts from the signals, a periodogram for feature extraction, and a Support Vector Machine (SVM) for classification.

TMPRSS3 expression is limited in spiral ganglion neurons: implication for successful cochlear implantation (BMJ, 2022)
Chen, Yuan-Siao; Cabrera, Ernesto; Tucker, Brady J.; Shin, Timothy J.; Moawad, Jasmine V.; Totten, Douglas J.; Booth, Kevin T.; Nelson, Rick F.
Otolaryngology -- Head and Neck Surgery, School of Medicine

Background: It is well established that biallelic mutations in transmembrane protease, serine 3 (TMPRSS3) cause hearing loss. Currently, there is controversy regarding the audiological outcomes after cochlear implantation (CI) for TMPRSS3-associated hearing loss. This controversy creates confusion among healthcare providers regarding the best treatment options for individuals with TMPRSS3-related hearing loss.

Methods: A literature review was performed to identify all published cases of patients with TMPRSS3-associated hearing loss who received a CI. CI outcomes of this cohort were compared with published adult CI cohorts using postoperative consonant-nucleus-consonant (CNC) word performance. TMPRSS3 expression in mouse cochlea and human auditory nerves (HAN) was determined by using hybridisation chain reaction and single-cell RNA-sequencing analysis.
Results: In aggregate, 27 patients (30 total CI ears) with TMPRSS3-associated hearing loss were treated with CI, and 85% of patients reported favourable outcomes. Postoperative CNC word scores in patients with TMPRSS3-associated hearing loss were not significantly different from those seen in adult CI cohorts (8 studies). Robust Tmprss3 expression occurs throughout the mouse organ of Corti and in the spindle and root cells of the lateral wall, with faint staining in <5% of the HAN, representing type II spiral ganglion neurons. Adult HAN express negligible levels of TMPRSS3.

Conclusion: The clinical features after CI and the physiological expression of TMPRSS3 argue against a major role of TMPRSS3 in auditory neurons.
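As a side note on method, the list-equivalency analysis in the PRESTO study above rests on two computable quantities: list intelligibility (mean keyword accuracy per list) and listener consistency (the relation between listeners' accuracy across lists). A minimal stdlib-only sketch of those two computations follows; the scores, list names, and the 0.05 cutoff are hypothetical illustrations, not the study's actual data or statistical criterion:

```python
from statistics import mean
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation between two equal-length score vectors."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

# Hypothetical keyword-accuracy scores: one vector of listener scores per list.
scores = {
    "List1": [0.62, 0.70, 0.55],
    "List2": [0.60, 0.71, 0.54],
    "List3": [0.52, 0.60, 0.45],
}

# List intelligibility: mean keyword accuracy for each list.
list_means = {name: mean(vals) for name, vals in scores.items()}

# Listener consistency: correlation of per-listener accuracy between list pairs.
consistency = {
    (a, b): pearson(scores[a], scores[b]) for a, b in combinations(scores, 2)
}

# Flag lists whose mean stays within an arbitrary 0.05 of the grand mean
# (an illustrative cutoff only, not the study's criterion).
grand = mean(list_means.values())
equivalent = [name for name, m in list_means.items() if abs(m - grand) <= 0.05]
```

With these toy numbers, List1 and List2 come out equivalent while List3 (systematically harder) does not, mirroring the kind of per-condition screening the study reports.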
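The EEG thesis above describes a concrete pipeline: bandpass filtering, periodogram feature extraction, and SVM classification. As a rough stdlib-only illustration of the feature-extraction stage, the sketch below computes a naive periodogram and a band-power feature; a real implementation would more likely use scipy.signal.periodogram and sklearn.svm.SVC, and the sampling rate, frequency bands, and test signal here are hypothetical:

```python
import cmath
import math

def periodogram(signal, fs):
    """Naive O(n^2) periodogram: squared DFT magnitude per frequency bin.
    Returns (frequency_hz, power) pairs up to the Nyquist bin."""
    n = len(signal)
    out = []
    for k in range(n // 2 + 1):
        s = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(signal))
        out.append((k * fs / n, abs(s) ** 2 / (fs * n)))
    return out

def band_power(signal, fs, lo_hz, hi_hz):
    """Total periodogram power inside [lo_hz, hi_hz]: a crude stand-in for
    bandpass filtering followed by feature extraction."""
    return sum(p for f, p in periodogram(signal, fs) if lo_hz <= f <= hi_hz)

# Hypothetical demo: a pure 10 Hz component should concentrate its power
# in the 8-12 Hz band rather than in 20-60 Hz.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 12)   # the kind of feature an SVM could use
high = band_power(sig, fs, 20, 60)
```

In the thesis's actual pipeline, features like these (or the full periodogram per electrode) would be collected across trials and fed to an SVM trained to discriminate vowels.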