Browsing by Author "Raje, Rajeev"
Now showing 1 - 10 of 30
Item: Active geometric model: multi-compartment model-based segmentation & registration (2014-08-26)
Authors: Mukherjee, Prateep; Tsechpenakis, Gavriil; Raje, Rajeev; Tuceryan, Mihran

We present a novel, variational and statistical approach to model-based segmentation. Our model generalizes the Chan-Vese model, which was proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, namely the Multi-Compartment Distance Function (mcdf). Our proposed framework for segmentation is two-fold: first, several training samples distributed across various classes are registered onto a common frame of reference; then, we use a variational method similar to Active Shape Models (ASMs) to generate an average shape model, which is used to partition new images. The key advantages of this framework are: (i) landmark-free, automated shape training; (ii) a strictly shape-constrained model for fitting test data. Our model can naturally deal with shapes of arbitrary dimension and topology (closed or open curves). We term our model the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: one for morphology estimation of 3D motor neuron compartments, another for thickness estimation of Henle's fiber layer in the retina. We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.

Item: Adversarial Attacks and Defense Mechanisms to Improve Robustness of Deep Temporal Point Processes (2022-08)
Authors: Khorshidi, Samira; Mohler, George; Al Hasan, Mohammad; Raje, Rajeev; Durresi, Arjan

Temporal point processes (TPPs) are mathematical approaches for modeling asynchronous event sequences by considering the temporal dependency of each event on past events and its instantaneous rate.
Temporal point processes can model a wide variety of problems, from earthquake aftershocks, trade orders, gang violence, and reported crime patterns to network analysis, infectious disease transmission, and virus spread forecasting. In each of these cases, an entity's behavior and the corresponding information are recorded over time as an asynchronous event sequence, and the analysis is done using temporal point processes, which provide a means to define the generative mechanism of the sequence of events and, ultimately, to predict events and investigate causality. Among point processes, the Hawkes process, a stochastic point process, can model a wide range of contagious and self-exciting patterns. One well-known application of the Hawkes process is predicting the evolution of viral processes on networks, an important problem in biology, the social sciences, and the study of the Internet. In existing works, mean-field analysis based upon degree distribution is used to predict viral spreading across networks of different types. However, it has been shown that degree distribution alone fails to predict the behavior of viruses on some real-world networks. Recent attempts have been made to use assortativity to address this shortcoming. This thesis illustrates how the evolution of such a viral process is sensitive to the underlying network's structure. In Chapter 3, we show that adding assortativity does not fully explain the variance in the spread of viruses for a number of real-world networks. We propose using the graphlet frequency distribution, combined with assortativity, to explain variations in the evolution of viral processes across networks with identical degree distributions.
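As background for the self-exciting Hawkes process mentioned above, a minimal simulation via Ogata-style thinning might look like the sketch below; the exponential kernel and all parameter values are illustrative assumptions, not taken from the thesis.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a Hawkes process with exponential kernel by Ogata's thinning.

    Conditional intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    """
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < horizon:
        # The intensity only decays until the next event, so its current value
        # is a valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)          # candidate next event time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:    # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

# Sub-critical regime (branching ratio alpha/beta < 1), as in a stable process.
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0)
```

The branching ratio alpha/beta controls the sub-critical versus super-critical regime that the adversarial attacks in this thesis manipulate.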
Using a data-driven approach that couples predictive modeling with viral process simulation on real-world networks, we show that simple regression models based on the graphlet frequency distribution can explain over 95% of the variance in virality on networks with the same degree distribution but different network topologies. Our results highlight the importance of graphlets and identify a small collection of graphlets that may have the most significant influence over the viral processes on a network. Due to the flexibility and expressiveness of deep learning techniques, several neural network-based approaches have recently shown promise for modeling point process intensities. However, there is a lack of research on possible adversarial attacks and on the robustness of such models with regard to adversarial attacks and natural shocks to systems. Furthermore, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown. In Chapter 4, we propose several white-box and black-box adversarial attacks against deep temporal point processes. Additionally, we investigate the transferability of white-box adversarial attacks against point processes modeled by deep neural networks, which are considered a more elevated risk. Extensive experiments confirm that neural point processes are vulnerable to adversarial attacks. This vulnerability is illustrated both in terms of predictive metrics and in the effect of attacks on the underlying point process's parameters. Notably, adversarial attacks successfully shift the temporal Hawkes process regime from sub-critical to super-critical and manipulate the fitted parameters, which is also a risk for parametric modeling approaches.
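The graphlet-based regression idea above, predicting virality from graphlet frequencies, can be sketched with synthetic data; the number of graphlet features, the coefficients, and the noise level below are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np

# Hypothetical setup: rows are networks with identical degree distribution;
# columns are normalized frequencies of small graphlets (e.g., triangle,
# 4-path, 4-cycle, 4-clique); y is a simulated final outbreak size.
rng = np.random.default_rng(0)
graphlet_freq = rng.random((40, 4))
true_w = np.array([2.0, -0.5, 1.0, 3.0])          # illustrative ground truth
virality = graphlet_freq @ true_w + rng.normal(0, 0.05, 40)

# Ordinary least squares: fit virality from graphlet frequencies.
X = np.column_stack([np.ones(40), graphlet_freq])  # intercept + features
w, *_ = np.linalg.lstsq(X, virality, rcond=None)

pred = X @ w
r2 = 1 - np.sum((virality - pred) ** 2) / np.sum((virality - virality.mean()) ** 2)
```

When the relationship is close to linear, as in this toy setup, a plain least-squares fit already yields an R-squared above 0.95, mirroring the kind of variance explained reported in the thesis.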
Additionally, we evaluate the vulnerability and performance of these models in the presence of non-stationary abrupt changes, using crime and COVID-19 pandemic datasets as examples. Given the security vulnerability of deep-learning models, including deep temporal point processes, to adversarial attacks, it is essential to ensure the robustness of the deployed algorithms, despite the success of deep learning techniques in modeling temporal point processes. In Chapter 5, we study the robustness of deep temporal point processes against several proposed adversarial attacks from the adversarial-defense viewpoint. Specifically, we investigate the effectiveness of adversarial training using universal adversarial samples in improving the robustness of deep point processes. Additionally, we propose a general point process domain-adopted (GPDA) regularization, which is strictly applicable to temporal point processes, to reduce the effect of adversarial attacks and obtain an empirically robust model. In this approach, unlike other computationally expensive approaches, there is no need for additional back-propagation in the training step, and no further network is required. Ultimately, we propose an adversarial detection framework that is trained in the Generative Adversarial Network (GAN) manner and solely on clean training data. Finally, in Chapter 6, we discuss the implications of the research and future research directions.

Item: Analyzing and evaluating security features in software requirements (2016-10-28)
Authors: Hayrapetian, Allenoush; Raje, Rajeev

Software requirements for complex projects often contain specifications of non-functional attributes (e.g., security-related features). The process of analyzing such requirements for standards compliance is laborious and error-prone.
Due to the inherently free-flowing nature of software requirements, it is tempting to apply Natural Language Processing (NLP) and Machine Learning (ML) based techniques to analyze these documents. In this thesis, we propose a novel semi-automatic methodology that assesses the security requirements of a software system with respect to completeness and ambiguity, creating a bridge between the requirements documents and compliance. Security standards, e.g., those introduced by ISO and OWASP, are compared against annotated software project documents for textual entailment relationships (NLP), and the results are used to train a neural network model (ML) for classifying security-based requirements. Hence, this approach aims to identify the appropriate structures that underlie software requirements documents. Once such structures are formalized and empirically validated, they will provide guidelines to software organizations for generating comprehensive and unambiguous requirements specification documents with respect to security-oriented features. The proposed solution will assist organizations during the early phases of developing secure software and reduce overall development effort and costs.

Item: Aural Mapping of STEM Concepts Using Literature Mining (2013-03-06)
Authors: Bharadwaj, Venkatesh; Palakal, Mathew J.; Raje, Rajeev; Xia, Yuni

Recent technological advances have made people's lives heavily dependent on Science, Technology, Engineering, and Mathematics (STEM) and its applications. Understanding basic science is a must in order to use and contribute to this technological revolution. Science education at the middle and high school levels, however, depends heavily on visual representations such as models, diagrams, figures, animations, and presentations. This leaves visually impaired students with very few options to learn science and secure a career in STEM-related areas.
Recent experiments have shown that small aural cues called Audemes help visually impaired students understand and memorize science concepts. Audemes are non-verbal sound translations of a science concept. To make science concepts available as Audemes for visually impaired students, this thesis presents an automatic system for Audeme generation from STEM textbooks. The thesis describes the systematic application of multiple Natural Language Processing tools and techniques, such as dependency parsing, POS tagging, information retrieval algorithms, semantic mapping of aural words, and machine learning, to transform a science concept into a combination of atomic sounds, thus forming an Audeme. We present a rule-based classification method for all STEM-related concepts. This work also presents a novel way of mapping and extracting the most related sounds for the words used in a textbook. Additionally, machine learning methods are used in the system to ensure that the output can be customized to a user's perception. The system presented is robust, scalable, fully automatic, and dynamically adaptable for Audeme generation.

Item: Auto-Generating Models From Their Semantics and Constraints (2013-08-20)
Authors: Pati, Tanumoy; Hill, James H. (James Haswell); Raje, Rajeev; Al Hasan, Mohammad

Domain-specific models powered by domain-specific modeling languages are traditionally created manually by modelers. Model intelligence techniques exist, such as constraint solvers and model guidance, which alleviate challenges associated with manually creating models; however, parts of the modeling process are still manual. Moreover, state-of-the-art model intelligence techniques are, in essence, reactive (i.e., invoked by the modeler). This thesis therefore provides two contributions to model-driven engineering research using domain-specific modeling languages (DSMLs).
First, it discusses how DSML semantics and constraints can enable proactive modeling, a form of model intelligence that foresees model transformations, automatically executes those transformations, and prompts the modeler for assistance when necessary. Secondly, this thesis shows how we integrated proactive modeling into the Generic Modeling Environment (GME). Our experience using proactive modeling shows that it can reduce modeling effort both by automatically generating required model elements and by guiding modelers in selecting which actions should be executed on the model.

Item: Characterizing software components using evolutionary testing and path-guided analysis (2013-12-16)
Authors: McNeany, Scott Edward; Hill, James H. (James Haswell); Raje, Rajeev; Al Hasan, Mohammad; Fang, Shiaofen

Evolutionary testing (ET) techniques (e.g., mutation, crossover, and natural selection) have been applied successfully to many areas of software engineering, such as error/fault identification, data mining, and software cost estimation. Previous research has also applied ET techniques to performance testing. Its application to performance testing, however, only goes as far as finding the best- and worst-case execution times. Although such performance testing is beneficial, it provides little insight into the performance characteristics of complex functions with multiple branches. This thesis therefore provides two contributions towards performance testing of software systems. First, it demonstrates how ET and genetic algorithms (GAs), which are search heuristics for solving optimization problems using mutation, crossover, and natural selection, can be combined with a constraint solver to target specific paths in the software. Secondly, it demonstrates how such an approach can identify local minimum and maximum execution times, which provides a more detailed characterization of software performance.
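The GA-driven performance search described above can be sketched as follows; the function under test, the timing-based fitness, and the GA parameters are illustrative assumptions, and the constraint-solver coupling from the thesis is omitted.

```python
import random
import time

def function_under_test(x):
    # Toy target with branch-dependent cost: the x % 7 == 0 path is far slower.
    total = 0
    n = 50_000 if x % 7 == 0 else 500
    for i in range(n):
        total += i
    return total

def fitness(x):
    # Measured execution time; longer runtime = higher fitness (worst-case search).
    start = time.perf_counter()
    function_under_test(x)
    return time.perf_counter() - start

def evolve(pop_size=20, generations=15, seed=1):
    rng = random.Random(seed)
    pop = [rng.randrange(1_000) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # natural selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                   # crossover (averaging)
            if rng.random() < 0.3:
                child += rng.randrange(-5, 6)      # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

A real constraint solver would replace the random initialization here, generating only inputs that satisfy the path conditions of the branch being characterized.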
The results from applying our approach to example software applications show that it can characterize different execution paths in relatively short amounts of time. This thesis also examines a modified exhaustive approach that can be plugged in when the constraint solver cannot provide the information needed to target specific paths.

Item: Collaborative detection of cyberbullying behavior in Twitter data (2017)
Authors: Mangaonkar, Amrita; Raje, Rajeev

As the size of Twitter data increases, so do undesirable behaviors of its users. One such undesirable behavior is cyberbullying, which could lead to catastrophic consequences. Hence, it is critical to detect cyberbullying behavior efficiently by analyzing tweets, in real time if possible. Prevalent approaches to identifying cyberbullying are mainly stand-alone and, thus, time-consuming. This thesis proposes a new distributed-collaborative approach for cyberbullying detection. It contains a network of detection nodes, each of which is independent and capable of classifying the tweets it receives. These detection nodes collaborate with each other when they need help classifying a given tweet. The study empirically evaluates various collaborative patterns and assesses the performance of each pattern in detail. Results indicate an improvement in the recall and precision of the detection mechanism over the stand-alone paradigm. Further, this research analyzes the scalability of the approach by increasing the number of nodes in the network. The empirical results obtained from experimentation show that the system is scalable. The study also incorporates experiments that analyze the behavior of the distributed-collaborative approach in the case of failures in the system.
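One possible shape of such a distributed-collaborative pattern is sketched below, with a toy keyword classifier standing in for the real per-node classifiers; the class, thresholds, and voting rule are all hypothetical illustrations, not the thesis's design.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionNode:
    """A detection node with its own (toy) keyword classifier and a peer list."""
    keywords: set
    threshold: float = 0.5
    peers: list = field(default_factory=list)

    def score(self, tweet):
        words = tweet.lower().split()
        hits = sum(1 for w in words if w in self.keywords)
        return hits / max(len(words), 1)

    def classify(self, tweet):
        s = self.score(tweet)
        # Confident either way: decide locally (stand-alone behavior).
        if s >= self.threshold or s == 0.0:
            return s >= self.threshold
        # Uncertain: ask peers and take a majority vote (one collaborative pattern).
        if not self.peers:
            return False
        votes = [p.score(tweet) >= p.threshold for p in self.peers]
        return sum(votes) > len(votes) / 2
```

Because each node only consults peers on uncertain tweets, collaboration adds recall on borderline cases without turning every classification into a network round trip.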
Additionally, this thesis tests the approach on a different domain, such as politics, to explore the possibility of generalizing the results.

Item: A Compressed Data Collection System For Use In Wireless Sensor Networks (2013-03-06)
Authors: Erratt, Newlyn S.; Liang, Yao; Raje, Rajeev; Tuceryan, Mihran

One of the most common goals of a wireless sensor network is to collect sensor data. The goal of this thesis is to provide an easy-to-use and energy-efficient system for deploying data collection sensor networks. There are numerous challenges associated with deploying a wireless sensor network for collecting sensor data; among these challenges are reducing energy consumption and the fact that users interested in collecting data may not be familiar with software design. This thesis presents a complete system, comprising the Compressed Data-stream Protocol and a general gateway for data collection in wireless sensor networks, which aims to provide an easy-to-use, energy-efficient, and complete system for data collection in sensor networks. The Compressed Data-stream Protocol is a transport-layer compression protocol whose primary goal, in this work, is to reduce energy consumption. Radio energy consumption in wireless sensor network nodes is expensive, and the Compressed Data-stream Protocol has been shown in simulations to reduce the energy used for transmission and reception by around 26%. The general gateway has been designed in such a way as to make customization simple without requiring vast knowledge of sensor networks and software development. This, along with the modular nature of the Compressed Data-stream Protocol, enables the creation of an easy-to-deploy and easy-to-configure sensor network for data collection. Findings show that the individual components work well and that the system as a whole performs without errors.
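The internals of the Compressed Data-stream Protocol are not given in the abstract; as a generic illustration of why transport-layer compression saves radio energy, the sketch below delta-encodes slowly varying sensor readings and packs the deltas as variable-length integers (the encoding choice is an assumption, not the thesis's scheme).

```python
def zigzag(n):
    # Map signed deltas to unsigned ints so small magnitudes stay small
    # (works for Python's arbitrary-precision ints via arithmetic shift).
    return (n << 1) ^ (n >> 63)

def encode_varint(n):
    # LEB128-style: 7 payload bits per byte, high bit marks continuation.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def compress_readings(readings):
    """Delta-encode a sequence of integer sensor readings, then varint-pack."""
    out = bytearray()
    prev = 0
    for r in readings:
        out += encode_varint(zigzag(r - prev))
        prev = r
    return bytes(out)

# Slowly varying temperature readings (tenths of a degree).
readings = [221, 222, 222, 223, 225, 224, 226, 227]
packed = compress_readings(readings)
raw_size = len(readings) * 2          # naive 2-bytes-per-reading payload
```

Since each transmitted byte costs radio energy, shrinking the payload this way translates directly into the kind of transmission/reception savings the thesis measures.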
This system, the components of which will eventually be released as open source, provides a platform for researchers purely interested in the gathered data to deploy a sensor network without being restricted to specific hardware vendors.

Item: Decentralized and Partially Decentralized Multi-Agent Reinforcement Learning (2013-08-22)
Authors: Tilak, Omkar Jayant; Mukhopadhyay, Snehasis; Si, Luo; Neville, Jennifer; Raje, Rajeev; Tuceryan, Mihran; Gorman, William J.

Multi-agent systems consist of multiple agents that interact and coordinate with each other to work towards a certain goal. Multi-agent systems naturally arise in a variety of domains, such as robotics, telecommunications, and economics. The dynamic and complex nature of these systems requires the agents to learn optimal solutions on their own instead of following a pre-programmed strategy. Reinforcement learning provides a framework in which agents learn optimal behavior based on the response obtained from the environment. In this thesis, we propose various novel decentralized, learning-automaton-based algorithms that can be employed by a group of interacting learning automata. We propose a completely decentralized version of the estimator algorithm. Compared to the completely centralized versions proposed before, this completely decentralized version proves to be a great improvement in terms of space complexity and convergence speed. The decentralized learning algorithm was applied, for the first time, to the domains of distributed object tracking and distributed watershed management. The results obtained from these experiments show the usefulness of decentralized estimator algorithms for solving complex optimization problems. Taking inspiration from the completely decentralized learning algorithm, we propose the novel concept of partial decentralization.
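As background for the learning automata above, the classic linear reward-inaction (L_R-I) update can be sketched as follows; the thesis's estimator algorithms are more elaborate, so this is only the textbook baseline, with illustrative reward probabilities.

```python
import random

def linear_reward_inaction(num_actions, reward_prob, steps=20_000, lr=0.01, seed=0):
    """Classic L_R-I learning automaton: on reward, shift probability mass
    toward the chosen action; on penalty, leave probabilities unchanged."""
    rng = random.Random(seed)
    p = [1.0 / num_actions] * num_actions
    for _ in range(steps):
        a = rng.choices(range(num_actions), weights=p)[0]
        if rng.random() < reward_prob[a]:          # environment rewards action a
            for j in range(num_actions):
                p[j] = (1 - lr) * p[j]             # shrink every probability...
            p[a] += lr                             # ...then boost the rewarded action
    return p

# Action 1 is rewarded most often; its probability should come to dominate.
p = linear_reward_inaction(3, reward_prob=[0.2, 0.8, 0.4])
```

Estimator algorithms improve on this baseline by maintaining reward estimates per action, which is what makes their decentralized versions interesting for convergence speed.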
Partial decentralization bridges the gap between completely decentralized and completely centralized algorithms, and thus forms a comprehensive and continuous spectrum of multi-agent algorithms for learning automata. To demonstrate the applicability of partial decentralization, we employ a partially decentralized team of learning automata to control multi-agent Markov chains. More flexibility and expressiveness can be added to the partially decentralized framework by allowing different decentralized modules to engage in different types of games. We propose the novel framework of heterogeneous games of learning automata, which allows the learning automata to engage in disparate games under the same formalism. We also propose an algorithm to control dynamic zero-sum games using heterogeneous games of learning automata.

Item: Design, development and experimentation of a discovery service with multi-level matching (2013-11-20)
Authors: Pileththuwasan Gallege, Lahiru Sandakith; Raje, Rajeev; Hill, James H. (James Haswell); Tuceryan, Mihran

The contribution of this thesis focuses on addressing the challenges of improving and integrating the UniFrame Discovery Service (URDS) and Multi-level Matching (MLM) concepts. The objective was to find enhancements for both URDS and MLM and to address the need for a comprehensive discovery service that goes beyond simple attribute-based matching. The thesis presents a detailed discussion of developing an enhanced version of URDS with MLM (proURDS). After implementing proURDS, the thesis includes details of experiments with different deployments of URDS components and different configurations of MLM. The experiments and analysis were carried out using MLM contracts produced by proURDS. The evaluation used a public dataset called the QWS dataset, which includes actual information about software components (i.e., web services) harvested from the Internet.
The proURDS implements the different matching operations as independent operators at each level of matching (i.e., General, Syntactic, Semantic, Synchronization, and QoS). Finally, a case study was carried out with the deployed proURDS. The case study addresses real-world component discovery requirements from the earth science domain, using contracts collected from public portals that provide geographical and weather-related data.
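Multi-level matching as described above can be pictured as a pipeline of independent filter operators, each narrowing the candidate set. The level names come from the abstract, but the matching logic, service records, and query fields below are purely illustrative (only three of the five levels are sketched).

```python
# Each level is an independent operator: query x service -> bool.
def general_match(query, svc):
    return query["domain"] == svc["domain"]

def syntactic_match(query, svc):
    return set(query["signature"]) <= set(svc["signature"])

def qos_match(query, svc):
    return svc["latency_ms"] <= query["max_latency_ms"]

# Semantic and Synchronization levels omitted for brevity.
LEVELS = [general_match, syntactic_match, qos_match]

def discover(query, services):
    """Apply each matching level in turn to narrow the candidate components."""
    candidates = services
    for match in LEVELS:
        candidates = [s for s in candidates if match(query, s)]
    return candidates

services = [
    {"name": "WeatherA", "domain": "weather", "signature": ["getTemp"], "latency_ms": 40},
    {"name": "WeatherB", "domain": "weather", "signature": ["getTemp"], "latency_ms": 900},
    {"name": "GeoC",     "domain": "geo",     "signature": ["getElev"], "latency_ms": 30},
]
query = {"domain": "weather", "signature": ["getTemp"], "max_latency_ms": 100}
found = discover(query, services)
```

Keeping each level as an independent operator, as proURDS does, means levels can be reordered, disabled, or reconfigured per deployment without touching the others.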