Browsing by Author "Bharadwaj, Venkatesh"
Now showing 1 - 2 of 2
Item: Aural Mapping of STEM Concepts Using Literature Mining (2013-03-06)
Authors: Bharadwaj, Venkatesh; Palakal, Mathew J.; Raje, Rajeev; Xia, Yun

Recent technological advances have made everyday life deeply dependent on Science, Technology, Engineering, and Mathematics (STEM) and its applications. A basic understanding of science is essential to use and contribute to this technological revolution. Science education at the middle and high school levels, however, depends heavily on visual representations such as models, diagrams, figures, animations, and presentations. This leaves visually impaired students with very few options to learn science and secure a career in STEM-related areas. Recent experiments have shown that short aural cues called audemes help visually impaired students understand and memorize science concepts. Audemes are non-verbal sound translations of a science concept. To make science concepts available to visually impaired students as audemes, this thesis presents an automatic system for audeme generation from STEM textbooks. It describes the systematic application of multiple Natural Language Processing (NLP) tools and techniques, including dependency parsing, POS tagging, an information retrieval algorithm, semantic mapping of aural words, and machine learning, to transform a science concept into a combination of atomic sounds, thus forming an audeme. We present a rule-based classification method for STEM concepts, along with a novel way of mapping and extracting the sounds most closely related to the words used in a textbook. Additionally, machine learning methods customize the system's output to a user's perception. The system is robust, scalable, fully automatic, and dynamically adaptable for audeme generation.

Item: Semantic Mapping of STEM Contents for Aural Representation Using Literature Mining (Office of the Vice Chancellor for Research, 2012-04-13)
Authors: Bharadwaj, Venkatesh; Palakal, Mathew; Mannheimer, Steven

As STEM education increasingly relies on illustrations, animations, and video to communicate complex concepts, blind and visually impaired (BVI) students are increasingly left behind. However, tablet computers and other digital technologies offer the potential for a sound-based solution that leverages the ability of BVI students to "think aurally" beyond simple spoken terminology. Previous work has shown that non-verbal sound can improve educational outcomes for BVI students. The challenge is translating science concepts that may be essentially soundless (e.g., photosynthesis or cumulus clouds) into sounds that communicate the component ideas of a concept. One key is to consider any science concept as a process or activity with actions and actors, and to identify sounds that refer to them. Our research focuses on computational strategies for analyzing the sentences used in standard K-12 textbooks to define or describe a given science concept-activity, and for generating a semantic sequence of words that correlates to the sounds that can best portray or embody it. This is done with Natural Language Processing (NLP) tools combined with a newly developed Information Extraction (IE) algorithm. Because each word in a semantic sequence can correlate to multiple sounds, it is necessary to find a dynamic path connecting the list of sounds that represents a word sequence in the context of the given science process or categorical domain. For example, multiple sounds are associated with the basic concept "water" (splashing, pouring, drops dripping), but in the context of "precipitation," dripping is most relevant. The algorithm that identifies the best concept-to-sound correlations is newly developed, self-learning, and adaptive. This research supports, and is informed by, experiments in aural pedagogy conducted at the Indiana School for the Blind and Visually Impaired. Our long-term goal is the generation of a language of non-verbal sounds.
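The context-dependent word-to-sound selection described above (where "water" maps to dripping in the context of "precipitation") can be sketched as a simple tag-overlap score. This is a minimal illustration, not the authors' self-learning algorithm; the sound bank, file names, and tag sets below are hypothetical.

```python
# Hypothetical sound library: each word maps to candidate sounds,
# each annotated with context tags describing when it is relevant.
SOUND_BANK = {
    "water": {
        "splashing.wav": {"swimming", "lake", "play"},
        "pouring.wav":   {"drink", "glass", "kitchen"},
        "dripping.wav":  {"rain", "precipitation", "drop", "condensation"},
    },
}

def best_sound(word, context):
    """Pick the candidate sound whose tags overlap the context most.

    `context` is a set of keywords drawn from the science concept
    being translated (e.g. the domain "precipitation").
    Returns None if the word has no candidate sounds.
    """
    candidates = SOUND_BANK.get(word)
    if not candidates:
        return None
    # Score each sound by how many of its tags appear in the context.
    return max(candidates, key=lambda sound: len(candidates[sound] & context))

# In the context of "precipitation", dripping wins over splashing/pouring:
print(best_sound("water", {"precipitation", "rain", "cloud"}))  # dripping.wav
```

The thesis replaces this static scoring with a learned, adaptive correlation, but the core decision (ranking several plausible sounds per word against the surrounding concept) has this shape.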