Browsing by Subject "Hearing"
Item: Differential At-Risk Pediatric Outcomes of Parental Sensitivity Based on Hearing Status (American Speech-Language-Hearing Association, 2021)
Jamsek, Izabela A.; Holt, Rachael Frush; Kronenberger, William G.; Pisoni, David B.; Psychiatry, School of Medicine
Purpose: The aim of this study was to investigate the role of parental sensitivity in language and neurocognitive outcomes in children who are deaf and/or hard of hearing (DHH). Method: Sixty-two parent–child dyads of children with normal hearing (NH) and 64 of children who are DHH (3–8 years) completed parent and child measures of inhibitory control/executive functioning and child measures of sentence comprehension and vocabulary. The dyads also participated in a video-recorded, free-play interaction that was coded for parental sensitivity. Results: There was no evidence of associations between parental sensitivity and inhibitory control or receptive language in children with NH. In contrast, parental sensitivity was related to children's inhibitory control and all language measures in children who are DHH. Moreover, inhibitory control significantly mediated the association between parental sensitivity and child language on the Clinical Evaluation of Language Fundamentals–Fifth Edition Following Directions subscale (6–8 years)/Clinical Evaluation of Language Fundamentals Preschool–Second Edition Concepts and Following Directions subscale (3–5 years). Follow-up analyses comparing subgroups of children who used hearing aids (n = 29) or cochlear implants (CIs; n = 35) revealed similar correlational trends, with the exception that parental sensitivity showed little relation to inhibitory control in the group of CI users. Conclusions: Parental sensitivity is associated with at-risk language outcomes and disturbances in inhibitory control in young children who are DHH. Compared to children with NH, children who are DHH may be more sensitive to parental behaviors and their effects on emerging inhibitory control and spoken language. Specifically, inhibitory control, when scaffolded by positive parental behaviors, may be critically important for robust language development in children who are DHH.

Item: Functional Hearing Quality in Prelingually Deaf School-Age Children and Adolescents with Cochlear Implants (Taylor & Francis, 2021)
Kronenberger, William G.; Bozell, Hannah; Henning, Shirley C.; Montgomery, Caitlin J.; Ditmars, Allison M.; Pisoni, David B.; Psychiatry, School of Medicine
Objective: This study investigated differences in functional hearing quality between youth with cochlear implants (CIs) and normal hearing (NH) peers, as well as associations between functional hearing quality and audiological measures, speech perception, language and executive functioning (EF). Design: Youth with CIs and NH peers completed measures of audiological functioning, speech perception, language and EF. Parents completed the Quality of Hearing Scale (QHS), a questionnaire measure of functional hearing quality. Study sample: Participants were 43 prelingually deaf, early-implanted, long-term CI users and 43 NH peers aged 7-17 years. Results: Compared to NH peers, youth with CIs showed poorer functional hearing quality on the QHS Speech, Localization, and Sounds subscales and more hearing effort on the QHS Effort subscale. QHS scores did not correlate significantly with audiological/hearing history measures but were significantly correlated with most speech perception, language and EF scores in the CI sample.
In the NH sample, QHS scores were uncorrelated with speech perception and language and were inconsistently correlated with EF. Conclusions: The QHS is a valid measure of functional hearing quality that is distinct from office-based audiometric or hearing history measures. Functional hearing outcomes are associated with speech-language and EF outcomes in CI users.

Item: Great headphones blend physics, anatomy and psychology – but what you like to listen to is also important for choosing the right pair (The Conversation US, Inc., 2021-11-24)
Hsu, Timothy; Music and Arts Technology, Herron School of Art and Design

Item: Hearing Loss and Use of Medications for Anxiety and/or Depression in Testicular Cancer Survivors Treated with Cisplatin-Based Chemotherapy (2020-05)
Ardeshirrouhanifard, Shirin; Song, Yiqing; Travis, Lois; Monahan, Patrick; Wessel, Jennifer
Testicular cancer is the most common solid tumor among young men. Although testicular cancer survivors (TCS) are expected to live for over 40 years after cancer diagnosis, they are at risk for chemotherapy adverse effects such as hearing loss (HL), tinnitus, and psychosocial effects. The aim of this study was to investigate factors associated with discrepancies between subjective and objective HL, factors associated with HL, and factors associated with the use of medications for anxiety/depression. TCS were enrolled in the Platinum Study. Sociodemographic characteristics, health behaviors, morbidities, and prescription medications were assessed through self-report using validated questionnaires. Bilateral pure-tone air-conduction thresholds were collected at frequencies of 0.25-12 kHz. To assess HL severity, hearing thresholds were classified according to American Speech-Language-Hearing Association criteria. Multivariable multinomial, ordinal, and binomial logistic regressions were used to test factors for association with discrepancy between subjective and objective HL, cisplatin-induced HL, and use of medications for anxiety/depression, respectively. Patients with HL at only extended high frequencies (10-12 kHz) could perceive hearing deficits; thus, it would be preferable for these frequencies to be included in audiometric assessments of cisplatin-treated adult-onset cancer survivors. Age, lack of noise exposure, and mixed/conductive HL were significantly associated with greater underestimation of HL severity. Hearing aid use and education were significantly associated with less underestimation of HL severity. Having tinnitus was associated with greater overestimation of HL severity. Age, cumulative cisplatin dose, and hypertension were significantly associated with greater HL severity, whereas post-graduate education was associated with less severe HL. Tinnitus and peripheral sensory neuropathy were associated with greater use of medications for anxiety/depression, while being employed and engaging in physical activity were significantly associated with less use of these medications. The sole use of patient-reported measures of HL might not be well suited to evaluating HL in cancer survivors; thus, audiometry may complement patient-reported HL. In terms of modifiable risk factors for cisplatin-induced HL, healthcare providers should monitor patients' blood pressure and manage hypertension appropriately.
In addition, healthcare providers need to manage tinnitus and peripheral neuropathy effectively to improve treatment outcomes for anxiety and depression.

Item: Hearing, Perception, and Language in Clinical and Typical Populations (Office of the Vice Chancellor for Research, 2010-04-09)
Miyamoto, Richard T.; Bergson, Tonya R.; Burns, Debra S.; Chin, Steven B.; Houston, Derek M.
The IUPUI Signature Center for Advanced Studies in Hearing, Perception, and Language is a multidisciplinary, multidepartmental, multischool center dedicated to the integration of knowledge and methodologies from different disciplines to study speech perception and production, music perception and production, language, and cognition in clinical populations across the lifespan. Examples of ongoing research include the assessment of adult cochlear implant users' perception of pitch; pediatric cochlear implant users' speech intelligibility, prosody, and vocal music production; infants' perception of auditory labels for visual objects; and breast cancer survivors' perception of musical patterns following chemotherapy. In one study, we documented differences in hearing and music cognition between breast cancer survivors who received adjuvant cancer treatment and healthy age- and education-matched controls. Participants were 29 female breast cancer survivors and 29 healthy controls. All participants received an audiometric test to assess hearing and The Montreal Battery for Evaluation of Amusia, which assesses such perceptual areas as melodic organization, temporal organization, and melodic memory. Results showed a moderate negative correlation between hearing and melodic organization scores across all subjects. For music cognition variables, effect-size analyses of melodic organization tasks (contour, intervals, tonality) suggested that healthy controls scored better than breast cancer survivors, although not significantly. The Center for Advanced Studies in Hearing, Perception, and Language continues to apply both standard and innovative analysis methodology to address cognitive issues of relevance to both clinical and typical populations.

Item: The murine catecholamine methyltransferase mTOMT is essential for mechanotransduction by cochlear hair cells (eLife Sciences Publications, 2017-05-15)
Cunningham, Christopher L.; Wu, Zizhen; Jafari, Aria; Zhao, Bo; Schrode, Kat; Harkins-Perry, Sarah; Lauer, Amanda; Müller, Ulrich; Otolaryngology -- Head and Neck Surgery, School of Medicine
Hair cells of the cochlea are mechanosensors for the perception of sound. Mutations in the LRTOMT gene, which encodes a protein with homology to the catecholamine methyltransferase COMT that is linked to schizophrenia, cause deafness. Here, we show that Tomt/Comt2, the murine ortholog of LRTOMT, has an unexpected function in the regulation of mechanotransduction by hair cells. The role of mTOMT in hair cells is independent of mTOMT methyltransferase function, and mCOMT cannot substitute for mTOMT function. Instead, mTOMT binds to putative components of the mechanotransduction channel in hair cells and is essential for the transport of some of these components into the mechanically sensitive stereocilia of hair cells. Our studies thus suggest functional diversification between mCOMT and mTOMT, where mTOMT is critical for the assembly of the mechanotransduction machinery of hair cells.
Defects in this process are likely mechanistically linked to deafness caused by mutations in LRTOMT/Tomt.

Item: Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension (Wolters Kluwer, 2017-05)
Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.; Psychiatry, School of Medicine
OBJECTIVES: Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings of an earlier study that investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. DESIGN: Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary Test-4th Edition and Expressive Vocabulary Test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). RESULTS: Consistent with the findings reported in the original study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary Test-4th Edition using language quotients to control for age effects. However, children who scored higher on the Expressive Vocabulary Test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child's ability to perceive noise-vocoded isolated words and sentences. CONCLUSIONS: First, we successfully replicated the major findings of the earlier study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children's ability to recognize words and sentences noise-vocoded to four spectral channels.
These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes, as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.

Item: Speed of Information Processing and Verbal Working Memory in Children and Adolescents With Cochlear Implants (Wolters Kluwer, 2023)
Herran, Reid M.; Montgomery, Caitlin J.; Henning, Shirley C.; Herbert, Carolyn J.; Ditmars, Allison M.; Yates, Catherine J.; Pisoni, David B.; Kronenberger, William G.; Otolaryngology -- Head and Neck Surgery, School of Medicine
Background: Verbal working memory delays are found in many deaf children with cochlear implants compared with normal-hearing peers, but the factors contributing to these delays are not well understood. This study investigated differences between cochlear implant users and normal-hearing peers in memory scanning speed during a challenging verbal working memory task. To better understand variability in verbal working memory capacity within each sample, associations between memory scanning speed, speech recognition, and language were also investigated. Methods: Twenty-five prelingually deaf, early-implanted children (age, 8-17 yr) with cochlear implants and 25 normal-hearing peers completed the Wechsler Intelligence Scale for Children, Fifth Edition, Letter-Number Sequencing (LNS) working memory task. Timing measures were made for response latency and average pause duration between letters/numbers recalled during the task. Participants also completed measures of speech recognition, vocabulary, and language comprehension. Results: Children with cochlear implants had longer pause durations than normal-hearing peers during three-span LNS sequences, but the groups did not differ in response latencies or in pause durations during two-span LNS sequences. In the sample of cochlear implant users, poorer speech recognition was correlated with longer pause durations during two-span sequences, whereas poorer vocabulary and weaker language comprehension were correlated with longer response latencies during two-span sequences. Response latencies and pause durations were unrelated to language in the normal-hearing sample. Conclusion: Children with cochlear implants have slower verbal working memory scanning speed than children with normal hearing. More robust phonological-lexical representations of language in memory may facilitate faster memory scanning speed and better working memory in cochlear implant users.

Item: Stereocilia morphogenesis and maintenance through regulation of actin stability (Elsevier, 2017-05)
McGrath, Jamis; Roy, Pallabi; Perrin, Benjamin J.; Biology, School of Science
Stereocilia are actin-based protrusions on auditory and vestibular sensory cells that are required for hearing and balance. They convert physical force from sound, head movement or gravity into an electrical signal, a process that is called mechanoelectrical transduction. This function depends on the ability of sensory cells to grow stereocilia of defined lengths.
These protrusions form a bundle with a highly precise geometry that is required to detect nanoscale movements encountered in the inner ear. Congenital or progressive stereocilia degeneration causes hearing loss. Thus, understanding stereocilia hair bundle structure, development, and maintenance is pivotal to understanding the pathogenesis of deafness. Stereocilia cores are made from a tightly packed array of parallel, crosslinked actin filaments, the length and stability of which are regulated in part by myosin motors, actin crosslinkers and capping proteins. This review aims to describe stereocilia actin regulation in the context of an emerging "tip turnover" model where actin assembles and disassembles at stereocilia tips while the remainder of the core is exceptionally stable.
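
A recurring methodological detail in the noise-vocoded speech item above is the reduction of speech to a small number of spectral channels. As a rough illustration of how such stimuli are commonly generated in general, and not the specific procedure used in that study, the sketch below filters a waveform into a few analysis bands, extracts each band's amplitude envelope, and uses that envelope to modulate band-limited noise before summing the channels. The band edges, filter orders, and envelope cutoff here are assumptions chosen only for illustration.

```python
# Illustrative noise-vocoder sketch (assumed parameters; not taken from any item above).
# Assumes a sampling rate of at least 16 kHz so the top band edge stays below Nyquist.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, band_edges_hz=(100, 558, 1501, 3316, 7000), env_cutoff_hz=50.0):
    """Return a noise-vocoded version of `speech` (1-D array) sampled at `fs` Hz.

    Five band edges define four analysis channels. Each channel's slowly varying
    amplitude envelope modulates noise limited to the same band, so spectral fine
    structure is discarded while coarse envelope cues are kept.
    """
    rng = np.random.default_rng(0)
    carrier = rng.standard_normal(len(speech))                   # broadband noise carrier
    env_sos = butter(2, env_cutoff_hz, btype="low", fs=fs, output="sos")
    vocoded = np.zeros(len(speech))
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)                     # speech limited to this band
        envelope = np.maximum(sosfiltfilt(env_sos, np.abs(band)), 0.0)  # smoothed envelope
        noise_band = sosfiltfilt(band_sos, carrier)              # noise limited to the same band
        vocoded += envelope * noise_band                         # envelope-modulated noise channel
    # Match the overall RMS of the input so levels remain comparable.
    vocoded *= np.sqrt(np.mean(speech**2) / (np.mean(vocoded**2) + 1e-12))
    return vocoded

# Hypothetical usage: degraded = noise_vocode(waveform, fs=16000)
```

With four channels spanning roughly 100 Hz to 7 kHz, the output keeps each band's slow envelope while replacing fine structure with noise, which is the kind of spectral degradation that item describes testing in listeners.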