Browsing by Author "Hill, James H."
Now showing 1 - 10 of 21
Item AUTO-GENERATING MODELS FROM THEIR SEMANTICS AND CONSTRAINTS (Office of the Vice Chancellor for Research, 2012-04-13) Pati, Tanumoy; Hill, James H.

Model-Driven Engineering (MDE) facilitates building solutions in many enterprise application domains through the systematic use of graphical languages called domain-specific modeling languages (DSMLs). MDE tools, such as the Generic Modeling Environment (GME) and the Generic Eclipse Modeling System (GEMS), enable end-users to rapidly create such custom DSMLs. When DSMLs are coupled with constraint solvers, it is possible for DSML end-users to auto-generate solutions (i.e., models) based on the goals of the constraint solver (e.g., finding the optimal deployment for a set of software components given resource availability and resource needs). One requirement of using a constraint solver with a DSML, however, is that modelers have to create an initial model, also known as a "partial model". This implies that it is the end-user's responsibility to (1) understand how to use the DSML and (2) understand when they have created an appropriate partial model that is a candidate for completion using a constraint solver. Our research therefore focuses on addressing these two problems. We address them by analyzing the semantics and constraints of the DSML (i.e., the meta-model). Based on our analysis, we then auto-generate as much of the model as possible until we reach a point that requires user intervention. At that point, we present a set of operations (or moves) to the user and continue the process until the model is complete, or is solvable using a constraint solver.

Item Computer Program Instrumentation Using Reservoir Sampling & Pin++ (2019-08) Upp, Brandon E.; Hill, James H.; Tuceryan, Mihran; Raje, Rajeev R.

This thesis investigates techniques for improving real-time instrumentation of software systems. In particular, it investigates two aspects of real-time software instrumentation. First, it investigates techniques for achieving different levels of visibility (i.e., ensuring all parts of a system are represented, or visible, in the final results) into a software system without compromising the system's performance. Second, it investigates how multi-core computing can be used to further reduce instrumentation overhead. The results of this research show that reservoir sampling can be used to reduce instrumentation overhead. Reservoir sampling at a rate of 20%, combined with parallelized disk I/O, added 34.1% additional overhead on a four-core machine and only 9.9% additional overhead on a sixty-four-core machine, while also providing the desired system visibility. Additionally, this work can be used to further improve the performance of real-time distributed software instrumentation.
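To make the partial-model completion idea in the AUTO-GENERATING MODELS abstract concrete, here is a toy C++ sketch (not the paper's GME-based implementation): components with CPU demands are placed onto nodes with limited capacity, assignments already made by the end-user (the partial model) are honored, and a backtracking solver places the rest, reporting when user intervention would be needed. All names and numbers are invented for illustration.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch: completing a "partial model" with a constraint
// solver. Components have CPU demands; nodes have capacities; some
// components are pre-assigned by the modeler (the partial model) and
// the solver assigns the rest by backtracking.
struct Component { std::string name; int cpu; int node; };  // node == -1 means unassigned

bool solve(std::vector<Component>& comps, std::vector<int>& freeCpu, std::size_t i) {
    if (i == comps.size()) return true;           // every component placed
    if (comps[i].node >= 0)                       // honor the partial model
        return solve(comps, freeCpu, i + 1);
    for (int n = 0; n < (int)freeCpu.size(); ++n) {
        if (freeCpu[n] >= comps[i].cpu) {         // resource constraint
            freeCpu[n] -= comps[i].cpu;
            comps[i].node = n;
            if (solve(comps, freeCpu, i + 1)) return true;
            freeCpu[n] += comps[i].cpu;           // backtrack
            comps[i].node = -1;
        }
    }
    return false;                                 // the point where the user must intervene
}

int main() {
    std::vector<Component> comps = {
        {"sensor", 2, 0},    // pre-assigned by the end-user (the partial model)
        {"planner", 3, -1},  // left for the solver
        {"logger", 1, -1},
    };
    std::vector<int> freeCpu = {4, 3};            // two nodes' remaining CPU
    freeCpu[0] -= comps[0].cpu;                   // account for the pre-assignment
    if (solve(comps, freeCpu, 0))
        for (const auto& c : comps)
            std::cout << c.name << " -> node " << c.node << '\n';
    else
        std::cout << "no completion: user intervention required\n";
}
```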
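Likewise, the reservoir sampling at the heart of the instrumentation thesis above can be sketched with the classic Algorithm R; this minimal C++ version is illustrative and is not the thesis's Pintool.

```cpp
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Minimal reservoir sampling (Algorithm R): after n observed events, each
// event has been kept with probability k/n, so the reservoir remains a
// uniform sample of everything seen so far.
class Reservoir {
public:
    explicit Reservoir(std::size_t k) : k_(k), rng_(std::random_device{}()) {}

    void observe(std::uint64_t event) {
        ++n_;
        if (sample_.size() < k_) {
            sample_.push_back(event);             // fill phase
        } else {
            std::uniform_int_distribution<std::uint64_t> d(0, n_ - 1);
            std::uint64_t j = d(rng_);
            if (j < k_) sample_[j] = event;       // replace with probability k/n
        }
    }

    const std::vector<std::uint64_t>& sample() const { return sample_; }

private:
    std::size_t k_;
    std::uint64_t n_ = 0;
    std::vector<std::uint64_t> sample_;
    std::mt19937_64 rng_;
};

int main() {
    Reservoir r(10);                              // keep a sample of 10 events
    for (std::uint64_t e = 0; e < 1000; ++e)      // e.g., 1000 traced method calls
        r.observe(e);
    for (auto e : r.sample()) std::cout << e << ' ';
    std::cout << '\n';
}
```

Because the reservoir never grows beyond k entries, the instrumentation's memory cost stays constant no matter how many events the traced program emits, which is why sampling rate trades visibility against overhead.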
Item A Conceptual Framework for Distributed Software Quality Network (2019-08) Patil, Anushka H.; Hill, James H.; Raje, Rajeev R.; Tuceryan, Mihran

The advancement of technology has revolutionized the role of software in recent years. Software is used in practically all areas of industry and has become a prime factor in the overall working of companies. With this increase in the use of software, software quality assurance parameters have become more crucial and complex. Currently, the quality measurement approaches, standards, and models applied in the software industry are extremely divergent. Often the right approach turns out to be a combination of concepts and techniques from different software assurance approaches [1]. Thus, a platform that provides a single workspace for incorporating multiple software quality assurance approaches will ease the overall software quality process. In this thesis we propose a theoretical framework for distributed software quality assurance that can continuously monitor a source code repository and create a snapshot of the system for a given commit (both past and present). The snapshot can be used to create a multi-granular blockchain of the system and its metrics (i.e., metadata), which we believe will let tool developers and vendors participate continuously in assuring the quality and security of systems, remain accessible when required, and be rewarded for their services.
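The multi-granular blockchain mentioned in the abstract above can be pictured as a hash chain over per-commit metric snapshots, so that tampering with any past snapshot invalidates every later link. The C++ sketch below is a minimal illustration with invented data, not the thesis's framework; std::hash stands in for a cryptographic hash such as SHA-256.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Each block records a commit's quality metrics and the previous block's
// hash; verification recomputes every hash and checks the links.
struct Block {
    std::string commit;      // commit id the snapshot was taken at
    std::string metrics;     // serialized metrics/metadata for that commit
    std::size_t prevHash;    // hash of the previous block
    std::size_t hash;        // hash over this block's contents
};

std::size_t hashBlock(const Block& b) {
    // std::hash is NOT cryptographic; a real chain would use SHA-256 or similar.
    return std::hash<std::string>{}(b.commit + '|' + b.metrics +
                                    '|' + std::to_string(b.prevHash));
}

int main() {
    std::vector<Block> chain;
    auto append = [&](std::string commit, std::string metrics) {
        Block b{std::move(commit), std::move(metrics),
                chain.empty() ? std::size_t{0} : chain.back().hash, 0};
        b.hash = hashBlock(b);
        chain.push_back(b);
    };
    append("a1b2c3", "sloc=1200;coverage=81%");   // invented snapshots
    append("d4e5f6", "sloc=1250;coverage=84%");

    bool ok = true;
    for (std::size_t i = 0; i < chain.size(); ++i) {
        if (chain[i].hash != hashBlock(chain[i])) ok = false;
        if (i > 0 && chain[i].prevHash != chain[i - 1].hash) ok = false;
    }
    std::cout << (ok ? "chain intact" : "chain tampered") << '\n';
}
```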
Item CORBA-JS: An Open-Standards Framework for Distributed Object Computing over the Web (Office of the Vice Chancellor for Research, 2013-04-05) Parulekar, Tejal B.; Feiock, Dennis C.; Hill, James H.

Distributed object computing (DOC) is a well-established software engineering paradigm for implementing distributed real-time and embedded (DRE) systems, such as real-time monitoring systems. Likewise, CORBA is a well-established DOC open standard used in DRE systems. Due to many technological limitations, DOC was traditionally unavailable in Web-based applications (i.e., stateful applications that communicate over HTTP and are accessible via a Web browser) without the use of proprietary, custom technologies. The problem with proprietary, custom technology is that it fragments the solution space, leaving some solutions unavailable to some end-users (e.g., Web sites that only work within a certain Web browser because of the technology used). With the advent of HTML5 and WebSockets, an open standard for enabling two-way communication over HTTP, DOC now has the necessary technological foundations to be realized within Web applications without proprietary, custom technologies. To date, however, no researchers have attempted to apply DOC over HTTP using well-established DOC open standards, such as CORBA. This research is therefore an initial investigation into implementing CORBA atop HTML5 and WebSockets. As part of this research, we are investigating the challenges in realizing the solution and proposing ways to improve the target programming languages and the CORBA specification. Doing so will enable developers to create feature-rich, real-time Web applications that improve upon current state-of-the-art approaches, e.g., Asynchronous JavaScript and XML (AJAX), which are resource intensive (e.g., use a lot of CPU, network bandwidth, and memory) and hard to program.

Item Enabling Real Time Instrumentation Using Reservoir Sampling and Binpacking (2023-05) Meruga, Sai Pavan Kumar; Hill, James H.; Durresi, Arjan; Zheng, Jiang Yu

This thesis investigates the overhead added by the reservoir sampling algorithm at different levels of granularity in real-time instrumentation of distributed software systems. First, this thesis not only discusses the inconsistencies found in the implementation of the reservoir sampling Pintool in [1], but also provides a correct implementation. Second, it provides the design and implementation of Pintools at different levels of granularity, i.e., thread level, image level, and routine level. Additionally, we provide a quantitative comparison of performance for different sampling techniques (including reservoir sampling) at different levels of granularity. The insights obtained from these empirical results show that enabling real-time instrumentation requires scaling and managing resources as effectively as possible. To scale the reservoir sampling algorithm in a real-time software system, we integrate the traditional bin packing approach with the instrumentation so as to decrease memory usage and improve performance. The results of this research show that the percentage difference in overhead between reservoir and constant sampling is 1.74% at image-level granularity, 0.3% at routine-level granularity, and 0.035% at thread-level granularity. Additionally, using the bin packing technique along with reservoir sampling normalizes memory usage and runtime performance for reservoir sampling across multiple threads and different system visibility levels.
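The bin packing side of the thesis above can be illustrated with the classical first-fit-decreasing heuristic. In this C++ sketch the items stand in for per-thread reservoir memory demands and the bins for fixed memory budgets; the framing and numbers are ours for illustration, not the thesis's implementation.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// First-fit decreasing: sort items largest-first, then place each item in
// the first bin with room, opening a new bin only when none fits.
std::vector<std::vector<int>> firstFitDecreasing(std::vector<int> items,
                                                 int capacity) {
    std::sort(items.begin(), items.end(), std::greater<int>());
    std::vector<std::vector<int>> bins;
    std::vector<int> freeSpace;
    for (int item : items) {
        bool placed = false;
        for (std::size_t b = 0; b < bins.size(); ++b) {
            if (freeSpace[b] >= item) {            // first bin it fits in
                bins[b].push_back(item);
                freeSpace[b] -= item;
                placed = true;
                break;
            }
        }
        if (!placed) {                             // open a new bin
            bins.push_back({item});
            freeSpace.push_back(capacity - item);
        }
    }
    return bins;
}

int main() {
    // Hypothetical per-thread reservoir sizes (MB) packed into 8 MB budgets.
    auto bins = firstFitDecreasing({5, 4, 3, 2, 2, 1}, 8);
    for (std::size_t b = 0; b < bins.size(); ++b) {
        std::cout << "bin " << b << ':';
        for (int i : bins[b]) std::cout << ' ' << i;
        std::cout << '\n';
    }
}
```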
Item The Feasibility of Using a Smart Button Mobile Health System to Self-Track Medication Adherence and Deliver Tailored Short Message Service Text Message Feedback (JMIR, 2019-06-25) Bartlett Ellis, Rebecca J.; Hill, James H.; Kerley, K. Denise; Sinha, Arjun; Ganci, Aaron; Russell, Cynthia L.; School of Nursing

BACKGROUND: As many as 50% of people experience medication nonadherence, yet studies for detecting nonadherence and delivering real-time interventions to improve adherence are lacking. Mobile health (mHealth) technologies show promise to track and support medication adherence. OBJECTIVE: The study aimed to evaluate the feasibility and acceptability of using an mHealth system for medication adherence tracking and intervention delivery. The mHealth system comprises a smart button device to self-track medication taking, a companion smartphone app, and a computer algorithm used to determine adherence and then deliver a standard or tailored SMS (short message service) text message based on the timing of medication taking. Standard SMS text messages indicated that the smartphone app registered the button press, whereas tailored SMS text messages encouraged habit formation and systems thinking based on the timing the medications were taken. METHODS: A convenience sample of 5 adults with chronic kidney disease (CKD), who were prescribed antihypertensive medication, participated in a 52-day longitudinal study. The study was conducted in 3 phases, with a standard SMS text message sent in phases 1 (study days 1-14) and 3 (study days 46-52) and tailored SMS text messages sent during phase 2 (study days 15-45) in response to participant medication self-tracking. Medication adherence was measured using (1) the smart button and (2) electronic medication monitoring caps. Concordance between these 2 methods was evaluated using the percentage of measurements made on the same day and occurring within ±5 min of one another. Acceptability was evaluated using qualitative feedback from participants. RESULTS: A total of 5 patients with CKD, stages 1-4, were enrolled in the study, with the majority being men (60%), white (80%), and Hispanic/Latino (40%), of middle age (52.6 years, SD 22.49; range 20-70). The mHealth system was successfully initiated in the clinic setting for all enrolled participants. Of the expected 260 data points, 36.5% (n=95) were recorded with the smart button and 76.2% (n=198) with electronic monitoring. Concordant events (n=94), in which events were recorded with both the smart button and electronic monitoring, occurred 47% of the time, and 58% of these events occurred within ±5 min of one another. Participant comments suggested the SMS text messages were encouraging. CONCLUSIONS: It was feasible to recruit participants in the clinic setting for an mHealth study, and our system was successfully initiated for all enrolled participants. The smart button is an innovative way to self-report adherence data, including the date and timing of medication taking, which were not previously available from measures that rely on recall of adherence. Although the selected smart button had poor concordance with the electronic monitoring caps, participants were willing to use it to self-track medication adherence, and they found the mHealth system acceptable to use in most cases.

Item Identifying and documenting false positive patterns generated by static code analysis tools (2017-07-18) Reynolds, Zachary P.; Hill, James H.

Static code analysis tools are known to flag a large number of false positives. A false positive is a warning message generated by a static code analysis tool for a location in the source code that does not have any known problems. This thesis presents our approach and results in identifying and documenting false positives generated by static code analysis tools. The goal of our study was to understand the different kinds of false positives generated so that we can (1) automatically determine whether a warning message from a static code analysis tool truly indicates an error, and (2) reduce the number of false positives developers must triage. We used two open-source tools and one commercial tool in our study. Our approach led to a hierarchy of 14 core false positive patterns, with some patterns appearing in multiple variations. We implemented checkers to identify the code structures of false positive patterns and to eliminate them from the output of the tools. Preliminary results showed that we were able to reduce the number of warnings by 14.0%-99.9% with a precision of 94.2%-100.0% by applying our false positive filters in different cases.
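The ±5 min concordance check in the smart-button study above can be expressed as a small matching routine. This C++ sketch uses invented timestamps and assumes a greedy one-to-one pairing rule for illustration; it is not the study's analysis code.

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>

// Count events recorded by both adherence measures within a time window.
// Timestamps are minutes since midnight; each cap event pairs at most once.
int concordantWithin(const std::vector<int>& smartButton,
                     const std::vector<int>& monitorCap,
                     int windowMinutes) {
    int count = 0;
    std::vector<bool> used(monitorCap.size(), false);
    for (int t : smartButton) {
        for (std::size_t j = 0; j < monitorCap.size(); ++j) {
            if (!used[j] && std::abs(t - monitorCap[j]) <= windowMinutes) {
                used[j] = true;                    // pair each cap event once
                ++count;
                break;
            }
        }
    }
    return count;
}

int main() {
    std::vector<int> button = {480, 1261};         // 8:00 and 21:01 (invented)
    std::vector<int> cap    = {483, 1300};         // 8:03 and 21:40 (invented)
    std::cout << concordantWithin(button, cap, 5)  // prints 1: only 8:00/8:03 match
              << " events within +/-5 min\n";
}
```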
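Similarly, the checkers described in the false-positive thesis above might be imagined as filters over warning records. The sketch below is hypothetical: the thesis's real checkers analyze code structure, and the pattern shown here is our own invention for illustration.

```cpp
#include <iostream>
#include <regex>
#include <string>
#include <vector>

// A toy false-positive filter: each checker encodes a pattern known to
// trigger a spurious warning, and matching warnings are dropped before
// developers triage the rest.
struct Warning { std::string tool, message, snippet; };

bool isKnownFalsePositive(const Warning& w) {
    // Invented example pattern: a "null dereference" warning on a pointer
    // that the surrounding code already guards with an explicit check.
    static const std::regex nullDeref("null (pointer )?dereference");
    static const std::regex guarded(
        "if\\s*\\(\\s*\\w+\\s*(!=\\s*NULL|!=\\s*nullptr)?\\s*\\)");
    return std::regex_search(w.message, nullDeref) &&
           std::regex_search(w.snippet, guarded);
}

int main() {
    std::vector<Warning> warnings = {
        {"toolA", "possible null dereference of 'p'",
         "if (p != nullptr) { use(p); }"},         // guarded: filtered out
        {"toolA", "possible null dereference of 'q'",
         "use(q);"},                               // unguarded: kept for triage
    };
    for (const auto& w : warnings)
        if (!isKnownFalsePositive(w))
            std::cout << "triage: " << w.message << '\n';
}
```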
Item MDE-URDS: A Mobile Device Enabled Service Discovery System (2011-08-16) Pradhan, Ketaki A.; Raje, Rajeev; Tuceryan, Mihran; Hill, James H.

Component-Based Software Development (CBSD) has gained widespread importance in recent times due to its wide-scale applicability in software development. System developers can now pick and choose from pre-existing components to suit their requirements when building a system. For developing a quality-aware system, finding suitable components offering services is an essential and critical step. Hence, service discovery is an important step in the development of systems composed from already existing quality-aware software services. Currently, there is a plethora of new-age devices, such as PDAs and cell phones, that automate daily activities and provide pervasive connectivity to users. The special characteristics of these devices (e.g., mobility, heterogeneity) make them attractive choices to host services. Hence, they need to be considered and integrated in the service discovery process. However, due to their limited battery life, intermittent connectivity, and constrained processing capabilities, this task is not a simple one. This research addresses the challenge of including resource-constrained devices by enhancing the UniFrame Resource Discovery System (URDS) architecture. The enhanced architecture is called the Mobile Device Enabled Service Discovery System (MDE-URDS). Experimental validation of MDE-URDS suggests that it is a scalable and quality-aware system that handles the limitations of mobile devices using existing, well-established algorithms and protocols, such as Mobile IP.

Item Pin++: An Object-oriented Framework for Writing Pintools (ACM, 2015) Hill, James H.; Feiock, Dennis C.; Department of Computer & Information Science, School of Science

This paper presents a framework named Pin++. Pin++ is an object-oriented framework that uses template metaprogramming to implement Pintools, which are analysis tools for the dynamic binary instrumentation tool named Pin. The goal of Pin++ is to simplify programming a Pintool and promote reuse of its components across different Pintools. Our results show that Pintools implemented using Pin++ can achieve a 54% reduction in complexity, increased modularity, and up to a 60% reduction in instrumentation overhead.

Item Qualitative and Quantitative Evaluation of Static Code Analysis Tools (Office of the Vice Chancellor for Research, 2013-04-05) Velicheti, Lakshmi Manohar Rao; Feiock, Dennis C.; Raje, Rajeev R.; Hill, James H.

Static code analysis (SCA) is a methodology for detecting errors in programs without compiling the source code to binary format and executing it on a machine. The main goal of an SCA tool is to help developers quickly identify errors that can jeopardize the security and integrity of the program. With the vast array of SCA tools available, each specializing in particular languages, error types, and detection methodologies, choosing the optimal tool(s) can be a daunting task for any software developer or organization. This, however, is not a problem associated only with SCA tools; it applies to any application domain where many tools exist and a subset of them must be selected to effectively tackle a given problem. To address this fundamental challenge of selecting the most appropriate SCA tool for a particular problem, this research is performing a comprehensive study of the different available SCA tools, both commercial and open-source. The end goal of this study is not only to evaluate how different SCA tools perform with respect to locating specific errors in source code (i.e., the quality of the tool), but also to model the behavior of each SCA tool using quantitative metrics gathered from the source code, such as source lines of code (SLOC), cyclomatic complexity, and function points. The behavioral model can then be used to prescreen existing (and new) source code and select the most appropriate SCA tool, or set of SCA tools, to identify the most errors in the source code undergoing analysis.
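For readers unfamiliar with Pintools, the following is the classic instruction-counting example written against Pin's plain C++ API. It illustrates the instrumentation/analysis split that the Pin++ paper above wraps in an object-oriented, template-metaprogramming interface; this is ordinary Pin code, not Pin++ code, and it builds only against Intel's Pin kit.

```cpp
#include <iostream>
#include "pin.H"   // requires Intel's Pin kit; built with Pin's makefiles

// Analysis routine: executed before every instruction of the traced program.
static UINT64 icount = 0;
VOID docount() { icount++; }

// Instrumentation routine: called once per instruction at JIT time to decide
// what analysis code to insert.
VOID Instruction(INS ins, VOID* v) {
    INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)docount, IARG_END);
}

// Called when the traced program exits.
VOID Fini(INT32 code, VOID* v) {
    std::cerr << "Count: " << icount << std::endl;
}

int main(int argc, char* argv[]) {
    if (PIN_Init(argc, argv)) return 1;       // parse Pin's command line
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_AddFiniFunction(Fini, 0);
    PIN_StartProgram();                       // never returns
    return 0;
}
```

Pin++'s contribution, per the abstract, is replacing this callback-and-cast style with reusable, type-safe components, which is where the reported reductions in complexity and overhead come from.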
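Finally, the behavioral model proposed in the SCA evaluation abstract could take the shape of a per-tool scoring function over code metrics. The weights and numbers below are invented placeholders, not results from the study; the sketch only shows how prescreening by metrics might select a tool.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical per-tool model: predict how many errors a tool will find in
// a code base from cheap metrics (SLOC, cyclomatic complexity, function
// points), then pick the best-scoring tool.
struct ToolModel {
    std::string name;
    double wSloc, wComplexity, wFuncPts;      // would be fitted per tool
};

double predictedErrorsFound(const ToolModel& t, double sloc,
                            double complexity, double funcPts) {
    return t.wSloc * sloc + t.wComplexity * complexity + t.wFuncPts * funcPts;
}

int main() {
    std::vector<ToolModel> tools = {
        {"open-source-A", 0.002, 0.8, 0.1},   // invented weights
        {"commercial-B", 0.001, 1.2, 0.3},
    };
    double sloc = 12000, complexity = 85, funcPts = 40;  // target code metrics

    const ToolModel* best = &tools[0];
    for (const auto& t : tools)
        if (predictedErrorsFound(t, sloc, complexity, funcPts) >
            predictedErrorsFound(*best, sloc, complexity, funcPts))
            best = &t;
    std::cout << "prescreening suggests: " << best->name << '\n';
}
```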