ScholarWorksIndianapolis

Browsing by Subject "Assistive computer technology"

Now showing 1 - 4 of 4
    Aural Mapping of STEM Concepts Using Literature Mining
    (2013-03-06) Bharadwaj, Venkatesh; Palakal, Mathew J.; Raje, Rajeev; Xia, Yuni
    Recent technological advances have made everyday life deeply dependent on Science, Technology, Engineering, and Mathematics (STEM) and its applications. Understanding basic science is essential to using and contributing to this technological revolution. Science education at the middle and high school levels, however, depends heavily on visual representations such as models, diagrams, figures, animations, and presentations. This leaves visually impaired students with very few options to learn science and pursue a career in STEM-related areas. Recent experiments have shown that small aural cues called audemes help visually impaired students understand and memorize science concepts. Audemes are non-verbal sound translations of a science concept. To make science concepts available as audemes for visually impaired students, this thesis presents an automatic system for audeme generation from STEM textbooks. The thesis describes the systematic application of multiple Natural Language Processing tools and techniques, such as dependency parsing, POS tagging, information retrieval, semantic mapping of aural words, and machine learning, to transform a science concept into a combination of atomic sounds, thus forming an audeme. We present a rule-based classification method for STEM-related concepts, along with a novel way of mapping and extracting the sounds most related to the words used in a textbook. Additionally, machine learning methods customize the output according to a user's perception. The resulting system is robust, scalable, fully automatic, and dynamically adaptable for audeme generation.
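    The pipeline the abstract describes, extracting salient words from a concept and mapping each to an atomic sound, can be sketched roughly as follows. This is a minimal illustrative stand-in, not the thesis's actual system: the stop-word list, the toy frequency-based keyword ranking (in place of POS tagging and dependency parsing), and the word-to-sound mapping are all hypothetical.

    ```python
    # Hypothetical sketch of an audeme-generation pipeline: extract salient
    # keywords from a concept sentence, then map each keyword to an atomic
    # sound. All names and data here are illustrative assumptions.
    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "of", "is", "are", "and", "in", "to", "by", "then"}

    # Illustrative word-to-sound mapping (the thesis derives such a mapping
    # via semantic mapping over a sound library; these filenames are made up).
    SOUND_LIBRARY = {
        "water": "stream_trickle.wav",
        "cycle": "rotating_gears.wav",
        "evaporation": "steam_hiss.wav",
        "rain": "rainfall.wav",
    }

    def extract_keywords(concept, top_n=3):
        """Rank non-stopword tokens by frequency (a toy stand-in for the
        POS tagging and dependency parsing used in the thesis)."""
        tokens = re.findall(r"[a-z]+", concept.lower())
        counts = Counter(t for t in tokens if t not in STOP_WORDS)
        return [word for word, _ in counts.most_common(top_n)]

    def generate_audeme(concept):
        """Return the ordered list of atomic sounds forming the audeme."""
        return [SOUND_LIBRARY[w] for w in extract_keywords(concept)
                if w in SOUND_LIBRARY]

    print(generate_audeme("The water cycle: evaporation of water, then rain"))
    # → ['stream_trickle.wav', 'rotating_gears.wav', 'steam_hiss.wav']
    ```

    The real system additionally classifies concepts with rules and learns per-user preferences; this sketch only shows the concept-to-atomic-sound chaining that defines an audeme.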
    DESIGN FOUNDATIONS FOR CONTENT-RICH ACOUSTIC INTERFACES: INVESTIGATING AUDEMES AS REFERENTIAL NON-SPEECH AUDIO CUES
    (2012-11-16) Ferati, Mexhid Adem; Pfaff, Mark; Bolchini, Davide; Lu, Amy Shirong; Palakal, Mathew J.
    To access interactive systems, blind and visually impaired users can leverage their auditory senses by using non-speech sounds. The current structure of non-speech sounds, however, is geared toward conveying user interface operations (e.g., opening a file) rather than large theme-based information (e.g., a history passage) and, thus, is ill-suited to signify the complex meanings of primary learning material (e.g., books and websites). To address this problem, this dissertation introduces audemes, a new category of non-speech sounds whose semiotic structure and flexibility open new horizons for facilitating the education of blind and visually impaired students. An experiment with 21 students from the Indiana School for the Blind and Visually Impaired (ISBVI) supports the hypothesis that audemes increase the retention of theme-based information. By acting as memory catalysts, audemes can play an important role in enhancing aural interaction and navigation in future sound-based user interfaces. For this dissertation, I designed an Acoustic EDutainment INterface (AEDIN) that integrates audemes as a way to vividly anticipate text-to-speech theme-based information and, thus, act as innovative aural covers. The results of two iterative usability evaluations with a total of 20 blind and visually impaired participants showed that AEDIN is a highly usable and enjoyable acoustic interface. Yet designing well-formed audemes remains an ad hoc process, because audeme creators can only rely on their intuition to generate meaningful and memorable sounds. To address this problem, this dissertation presents three experiments, each with 10 blind and visually impaired participants, whose goal was to examine the optimal combination of audeme attributes for facilitating accurate recognition of audeme meanings. This work led to the creation of seven basic guidelines for designing well-formed audemes. An interactive application tool (ASCOLTA: Advanced Support and Creation-Oriented Library Tool for Audemes) operationalized these guidelines to support individuals without an audio background in designing well-formed audemes. An informal evaluation conducted with three teachers from the ISBVI supports the hypothesis that ASCOLTA is a useful tool for facilitating the integration of audemes into the teaching environment.
    Digital Rights Management: Pitfalls and Possibilities for People with Disabilities
    (University of Michigan, 2007-01) Kramer, Elsa F.
    This paper argues that electronic barriers intended to protect intellectual property can prevent equal access to digital materials by readers with visual or hearing disabilities, and thus deny those readers their fair-use rights. It provides a basic overview of copyright law, summarizes publishers’ concerns about intellectual property, and discusses information access by users with special needs to explain why digital rights management (DRM) is used, how it can interfere with access and fair use, and some ways those problems are being addressed.
    Eyes-free interaction with aural user interfaces
    (2015-04-11) Rohani Ghahari, Romisa; Bolchini, Davide
    Existing web applications force users to focus their visual attention on mobile devices while browsing content and services on the go (e.g., while walking or driving). To support mobile, eyes-free web browsing and minimize interaction with devices, designers can leverage the auditory channel. Whereas acoustic interfaces have proven effective in reducing visual attention, designing aural information architectures for the web remains a perplexing challenge because of the web's non-linear structure. To address this problem, we introduce and evaluate techniques to remodel existing information architectures as "playlists" of web content - aural flows. The use of aural flows in mobile web browsing is demonstrated in ANFORA News, a semi-aural mobile site designed to facilitate browsing large collections of news stories. An exploratory study involving frequent news readers (n=20) investigated the usability and navigation experience with ANFORA News in a mobile setting. The initial evidence suggests that aural flows are a promising paradigm for supporting eyes-free mobile navigation while on the go. Interacting with aural flows, however, requires users to select interface buttons, tethering visual attention to the mobile device even when it is unsafe. To reduce visual interaction with the screen, we also explore the use of simulated voice commands to control aural flows. In a study, 20 participants browsed aural flows either through a visual interface or through a visual interface augmented by voice commands. The results suggest that using voice commands halves the time spent looking at the device but yields walking speeds, system usability, and cognitive effort ratings similar to using buttons. To test the potential of aural flows in a more distracting context, a study (n=60) was conducted in a driving simulation lab. Each participant drove through three driving scenarios of low, moderate, and high complexity. Within each complexity level, the participants experienced one of three aural application conditions: no device, voice-controlled aural flows (ANFORADrive), or an alternative solution on the market (Umano). The results suggest that voice-controlled aural flows do not affect distraction, overall safety, cognitive effort, driving performance, or driving behavior when compared to the no-device condition.
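    The core idea above, remodeling a non-linear site as a linear, voice-controllable playlist, can be sketched as a small data structure. This is a minimal illustrative sketch under stated assumptions: the story titles and the command vocabulary ("next", "previous") are hypothetical, not ANFORA's actual interface.

    ```python
    # Hypothetical sketch of an "aural flow": an ordered playlist of
    # text-to-speech segments stepped through by simple voice-style
    # commands, so the user's eyes stay off the screen.
    from dataclasses import dataclass

    @dataclass
    class AuralFlow:
        stories: list      # ordered segments to be read aloud
        position: int = 0  # index of the currently playing segment

        def current(self):
            return self.stories[self.position]

        def handle(self, command):
            """Dispatch a recognized voice command to a playlist action,
            clamping at either end of the flow."""
            if command == "next" and self.position < len(self.stories) - 1:
                self.position += 1
            elif command == "previous" and self.position > 0:
                self.position -= 1
            return self.current()

    flow = AuralFlow(["Local headlines", "World news", "Sports recap"])
    flow.handle("next")
    print(flow.current())  # → World news
    ```

    A real implementation would feed `current()` to a text-to-speech engine and take `handle()` input from a speech recognizer; the linear clamped traversal is what makes the interaction predictable without a screen.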