Browsing by Subject "blind and visually impaired (BVI)"
Item: Fast and Discreet access to web services for the Blind through Screenless Browsing (Office of the Vice Chancellor for Research, 2016-04-08)
Authors: Bolchini, Davide; Abhishek Dara, Joe; Bhat, Dhanashree; Pachhapukur, Shilpa; Chamboneth, Yhareli

Web services on our smartphones have become an integral part of our daily lives. Services like Google Maps and Yelp have helped us explore the world better. However, the blind and visually impaired (BVI) spend unnecessary cognitive and mechanical effort navigating complex menus displayed on a mobile device before they can locate and access the content of their interest. More direct access may happen via voice-based services (e.g., Siri), but at the cost of breaking privacy and social boundaries. To combat this issue, we propose Screenless Browsing: combining hand-gesture recognition with aural navigation patterns that enable the BVI to quickly navigate aural menus through nimble, discrete hand movements. We propose to decouple the friction-prone mechanical interaction with a mobile display from the navigation experience. We demonstrate our approach by: (1) introducing novel aural browsing menus that combine web content with binary splitting, dynamic sorting, and playlists to accelerate navigation across collections; (2) mapping aural menu navigation to the robust and simple vocabulary of hand movements enabled by Myo, an off-the-shelf muscle-controlled armband; (3) reifying our approach by iteratively prototyping Screenless Browsing of mobile applications for the BVI; (4) conducting a user study to assess the limits and potential of our approach with participants from the Indiana School for the Blind and Visually Impaired (ISBVI). We believe that the ability to access web services on the move, without taking the phone out of the pocket, will empower the BVI to navigate and explore places effectively. Our work exemplifies a novel way to reduce unwanted friction with the device and maximize the content experience for the BVI.
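To make the navigation pattern concrete, below is a minimal sketch of a binary-split aural menu driven by gesture events. Every name in it (Gesture, AuralMenu, speak, the gesture vocabulary) is a hypothetical illustration, not the poster's implementation or the Myo SDK's actual API.

```python
# Minimal sketch of a binary-split aural menu driven by gesture events.
# All names here (Gesture, AuralMenu, speak) are hypothetical illustrations,
# not the authors' implementation or the Myo SDK's API.
from enum import Enum, auto

class Gesture(Enum):
    WAVE_LEFT = auto()   # keep the half before the announced midpoint
    WAVE_RIGHT = auto()  # keep the half from the announced midpoint on
    FIST = auto()        # select the announced item

def speak(text: str) -> None:
    """Placeholder for text-to-speech output."""
    print(f"[TTS] {text}")

class AuralMenu:
    """Navigates a sorted collection by repeated binary splitting,
    so a long list is reachable in O(log n) gestures."""

    def __init__(self, items):
        self.items = sorted(items)
        self.lo, self.hi = 0, len(self.items)  # current half-open range

    def announce(self):
        mid = (self.lo + self.hi) // 2
        speak(f"{self.hi - self.lo} items; midpoint: {self.items[mid]}")

    def on_gesture(self, gesture: Gesture):
        mid = (self.lo + self.hi) // 2
        if gesture is Gesture.WAVE_RIGHT:
            self.lo = min(mid, self.hi - 1)   # never empty the range
        elif gesture is Gesture.WAVE_LEFT:
            self.hi = max(mid, self.lo + 1)   # never empty the range
        elif gesture is Gesture.FIST:
            speak(f"Selected: {self.items[mid]}")
            return self.items[mid]
        self.announce()
        return None

menu = AuralMenu(["Bakery", "Bank", "Bus stop", "Cafe", "Library", "Pharmacy"])
menu.announce()                      # "6 items; midpoint: Cafe"
menu.on_gesture(Gesture.WAVE_RIGHT)  # narrow to the upper half
menu.on_gesture(Gesture.FIST)        # select the announced item
```

A real system would also have to handle the dynamic sorting and playlist menus the abstract mentions; this sketch covers only the binary-splitting idea.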
Item: SEMANTIC MAPPING OF STEM CONTENTS FOR AURAL REPRESENTATION USING LITERATURE MINING (Office of the Vice Chancellor for Research, 2012-04-13)
Authors: Bharadwaj, Venkatesh; Palakal, Mathew; Mannheimer, Steven

As STEM education increasingly relies on illustrations, animations, and video to communicate complex concepts, blind and visually impaired (BVI) students are increasingly left behind. However, tablet computers and other digital technologies offer the potential for a sound-based solution that leverages the ability of BVI students to “think aurally” beyond simple spoken terminology. Previous work has shown that non-verbal sound can improve educational outcomes for BVI students. The challenge is translating science concepts that may be essentially soundless (e.g., photosynthesis or cumulus clouds) into sounds that communicate the component ideas of a concept. One key is to consider any science concept as a process or activity with actions and actors, and to identify sounds that refer to them. Our research focuses on computational strategies for analyzing the sentences used in standard K-12 textbooks to define or describe any given science concept-activity, and for generating a semantic sequence of words that correlates to sounds that can best portray or embody them. This is done with the help of Natural Language Processing (NLP) tools in combination with a newly developed Information Extraction (IE) algorithm.

Because each word in a semantic sequence can potentially correlate to multiple sounds, it is necessary to find a dynamic path connecting the list of sounds that represent a word sequence in the context of the given science process or categorical domain. For example, multiple sounds are associated with the basic concept “water”: e.g., splashing, pouring, drops dripping. But in the context of “precipitation,” dripping is most relevant. Concept-to-sound correlations are identified by a newly developed, self-learning, adaptive algorithm. This research supports, and is informed by, experiments in aural pedagogy conducted at the Indiana School for the Blind and Visually Impaired. Our long-term goal is the generation of a language of non-verbal sounds.
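As an illustration of the extraction step, the sketch below pulls a semantic sequence of actors and actions from a sentence using spaCy's part-of-speech tags. It stands in for the general idea only, not the authors' IE algorithm; the example sentence and the expected output are invented.

```python
# Sketch of extracting an actor/action "semantic sequence" from a textbook
# sentence with spaCy. An illustration of the general idea, not the authors'
# Information Extraction algorithm.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

def semantic_sequence(sentence: str) -> list[str]:
    """Return content words (actors and actions) in sentence order,
    dropping function words that have no plausible sound."""
    doc = nlp(sentence)
    keep = {"NOUN", "PROPN", "VERB"}
    return [tok.lemma_ for tok in doc if tok.pos_ in keep]

print(semantic_sequence(
    "During precipitation, water falls from clouds and drips onto the ground."
))
# e.g. ['precipitation', 'water', 'fall', 'cloud', 'drip', 'ground']
```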
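The context-dependent word-to-sound choice can likewise be sketched as scoring each word's candidate sounds by how well their domain tags match the current science process. The sound library, tags, and filenames below are invented for illustration; a full system would also score transitions between consecutive sounds along the whole sequence (the "dynamic path"), which this per-word sketch omits.

```python
# Sketch of choosing one sound per word in context. Each word maps to several
# candidate sounds tagged with the domains they evoke; the domain of the
# science process ("precipitation") breaks ties. The library and tags are
# invented; the self-learning algorithm itself is not reproduced here.
SOUND_LIBRARY = {
    "water": [("splashing.wav", {"swimming", "ocean"}),
              ("pouring.wav",   {"kitchen", "laboratory"}),
              ("dripping.wav",  {"rain", "precipitation"})],
    "cloud": [("wind.wav",      {"weather", "precipitation"}),
              ("thunder.wav",   {"storm"})],
}

def best_sound(word: str, context: str) -> str:
    """Pick the candidate whose domain tags match the context, if any."""
    candidates = SOUND_LIBRARY.get(word, [])
    if not candidates:
        return ""
    return max(candidates, key=lambda c: int(context in c[1]))[0]

for w in ["water", "cloud"]:
    print(w, "->", best_sound(w, "precipitation"))
# water -> dripping.wav   (matches the example in the abstract)
# cloud -> wind.wav
```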