Browsing by Author "Hosler, Ryan"
Item: Electronic Voting Technology Inspired Interactive Teaching and Learning Pedagogy and Curriculum Development for Cybersecurity Education (Springer, 2021-07)
Authors: Hosler, Ryan; Zou, Xukai; Bishop, Matt
Affiliation: Computer and Information Science, School of Science
Abstract: Cybersecurity is becoming increasingly important to individuals and society alike. However, due to its theoretical and practical complexity, keeping students interested in the foundations of cybersecurity is a challenge. One way to excite such interest is to tie it to current events, for example, elections. Elections are important to both individuals and society, and they typically dominate the news before and during the vote. We are developing a curriculum based on elections and, in particular, an electronic voting protocol. Basing the curriculum on an electronic voting framework allows one to teach critical cybersecurity concepts such as authentication, privacy, secrecy, access control, and encryption, as well as the role of non-technical factors such as policies and laws, since cybersecurity must account for societal and human factors. Student-centered interactions and projects allow students to apply these concepts, thereby reinforcing their learning.

Item: GAN-inspired Defense Against Backdoor Attack on Federated Learning Systems (IEEE, 2023-09)
Authors: Sundar, Agnideven Palanisamy; Li, Feng; Zou, Xukai; Gao, Tianchong; Hosler, Ryan
Affiliation: Computer Science, Luddy School of Informatics, Computing, and Engineering
Abstract: Federated Learning (FL) allows clients with limited data resources to jointly build better Machine Learning models without compromising their privacy. However, aggregating contributions from many clients means that errors present in some clients' resources propagate to all clients through the combined model. Malicious entities exploit this weakness to disrupt the normal functioning of the FL system for their own gain. A backdoor attack is one such attack, in which malicious entities act as clients and implant a small trigger into the global model. Once implanted, the model performs the attacker-desired task in the presence of the trigger but behaves benignly otherwise. In this paper, we build a GAN-inspired defense mechanism that can detect and defend against such backdoor triggers. The unavailability of labeled benign and backdoored models has prevented researchers from building detection classifiers; we tackle this problem by using the clients as Generators to construct the required dataset. We place the Discriminator on the server side, where it acts as a binary classifier that detects backdoored models. We experimentally demonstrate the effectiveness of our approach on the image-based non-IID datasets CIFAR10 and CelebA. Our prediction-probability-based defense mechanism successfully removes the influence of backdoors from the global model.
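A minimal sketch may help make the server-side idea in this abstract concrete: a binary discriminator scores each flattened client update before aggregation, and flagged updates are excluded from the average. The layer sizes, the sigmoid threshold, and the fallback averaging rule below are illustrative assumptions, not the authors' exact design, and the discriminator is assumed to have already been trained on the generator-produced labeled updates the paper describes.

```python
# Hypothetical sketch of a server-side backdoor filter for federated learning.
# Assumption: the discriminator was already trained on labeled benign vs.
# backdoored updates produced by the clients acting as generators.
import torch
import torch.nn as nn

class UpdateDiscriminator(nn.Module):
    """Binary classifier over flattened client weight updates."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit: higher means "looks backdoored"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def filter_and_average(updates: list[torch.Tensor],
                       disc: UpdateDiscriminator,
                       threshold: float = 0.5) -> torch.Tensor:
    """Drop updates the discriminator flags, then average the rest."""
    scores = torch.sigmoid(disc(torch.stack(updates))).squeeze(1)
    kept = [u for u, s in zip(updates, scores) if s.item() < threshold]
    pool = kept if kept else updates  # fall back if everything was flagged
    return torch.stack(pool).mean(dim=0)
```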
Item: Hardware Speculation Vulnerabilities and Mitigations (IEEE, 2021-10)
Authors: Swearingen, Nathan; Hosler, Ryan; Zou, Xukai
Affiliation: Computer and Information Science, School of Science
Abstract: This paper discusses speculation vulnerabilities, which arise from hardware speculation, an optimization technique. Unlike many other types of vulnerabilities, these are very difficult to patch completely, so techniques have been developed to mitigate them. We survey many variants of this type of vulnerability, examine the techniques that mitigate them, and assess the effectiveness and scope of each. Finally, we compare and evaluate the different vulnerabilities and mitigation techniques and recommend how the various mitigations apply to different situations.

Item: Learning Discriminative Features for Adversarial Robustness (IEEE Xplore, 2022-04)
Authors: Hosler, Ryan; Phillips, Tyler; Yu, Xiaoyuan; Sundar, Agnideven; Zou, Xukai; Li, Feng
Affiliation: Computer and Information Science, School of Science
Abstract: Deep Learning models have shown image classification capabilities that surpass human performance. However, they remain susceptible to image perturbations that a human cannot perceive: a slightly modified input, known as an Adversarial Example, produces drastically different model behavior. The use of Adversarial Machine Learning to generate Adversarial Examples remains a security threat in the field of Deep Learning, so defending against such attacks is an actively studied area of Deep Learning security. In this paper, we examine the adversarial robustness of discriminative loss functions, which promote inter-class separability and intra-class compactness. Generating an Adversarial Example should therefore be more difficult, since the decision boundaries between classes are more pronounced. To test this, we conducted White-Box and Black-Box attacks on Deep Learning models trained with different discriminative loss functions, each optimized both with and without adversarial robustness in mind. In our experiments, White-Box attacks were effective against all models, even those trained for adversarial robustness, with varying degrees of effectiveness. However, state-of-the-art Deep Learning models such as ArcFace show significant adversarial robustness against Black-Box attacks when paired with adversarial defense methods. Moreover, by exploring Black-Box attacks, we demonstrate the transferability of Adversarial Examples while using surrogate models optimized with different discriminative loss functions.
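As a concrete instance of the White-Box attacks evaluated above, the sketch below implements the standard one-step Fast Gradient Sign Method (FGSM). The perturbation budget and the plain cross-entropy loss are illustrative assumptions; the paper's models are trained with discriminative losses such as ArcFace-style margins, which this sketch does not reproduce.

```python
# Minimal one-step FGSM white-box attack (illustrative; epsilon and the
# cross-entropy loss are assumptions, not the paper's exact setup).
import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module,
                 x: torch.Tensor,
                 y: torch.Tensor,
                 epsilon: float = 8 / 255) -> torch.Tensor:
    """Perturb x in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to valid pixels.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```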
Item: Low Cost Gunshot Detection using Deep Learning on the Raspberry Pi (IEEE, 2019-12)
Authors: Morehead, Alex; Ogden, Lauren; Magee, Gabe; Hosler, Ryan; White, Bruce; Mohler, George
Affiliation: Computer and Information Science, School of Science
Abstract: Many cities that use gunshot detection technology depend on expensive systems, such as ShotSpotter, that ultimately rely on humans to differentiate between gunshots and non-gunshots. A scalable gunshot detection system that is low in cost and high in accuracy would therefore benefit cities across the globe by delegating to machines a task typically performed by humans. We created a repository of audio data from sound clips collected from online audio databases and from clips recorded with a USB microphone in residential areas and at a gun range. One-dimensional and two-dimensional convolutional neural networks were then trained to recognize gunshots, the former on the raw audio and the latter on spectrograms derived from it. These models were deployed to a Raspberry Pi 3 Model B+ with a short message service (SMS) modem and a USB microphone attached, using a software pipeline that continuously analyzes discrete two-second chunks of audio and alerts a set of phone numbers if a gunshot is detected in a chunk. Testing found that a majority-vote ensemble of our one-dimensional and two-dimensional models fared best, with an accuracy above 99% on validation data as well as when distinguishing gunshots from fireworks. Besides raising the safety standards for a city's residents, the findings of this project expand the current state of knowledge regarding sound-based applications of convolutional neural networks.

Item: Towards Representation Learning for Robust Network Intrusion Detection Systems (2024-05)
Contributors: Hosler, Ryan; Zou, Xukai; Li, Feng; Tsechpenakis, Gavriil; Durresi, Arjan; Hu, Qin
Abstract: The most cost-effective method of cybersecurity defense is prevention. Ideally, before a malicious actor steals information or affects the functionality of a network, a Network Intrusion Detection System (NIDS) will identify the attack and allow it to be prevented entirely. For this reason, commercial rule-based NIDS are available that use a packet sniffer to monitor all incoming network traffic for potential intrusions. Such a NIDS, however, only works on known intrusions, so researchers have devised sophisticated Deep Learning methods for detecting malicious network activity. By using statistical features of network flows, such as packet count, connection duration, and flow bytes per second, a Machine Learning or Deep Learning NIDS may identify an advanced attack that would otherwise bypass a rule-based NIDS. This research develops novel applications of Deep Learning for NIDS development. Specifically, image embedding algorithms are adapted to this domain, and novel methods are proposed for representing network traffic as a graph and applying Deep Graph Representation Learning algorithms to build a NIDS. The methods developed in this research outperform existing state-of-the-art methods from the NIDS literature on numerous network traffic datasets, and a NIDS was deployed and successfully configured in a live network environment. The research is also applied to Android malware detection. Existing work has been unable to accurately detect Android malware by analyzing the network traffic produced by benign and malicious applications, relying instead on features extracted from the APK file itself. This research presents a NIDS-inspired graph-based model that demonstrably distinguishes benign from malicious applications through analysis of network traffic alone and outperforms existing sophisticated malware detection frameworks.

Item: Unsupervised Deep Learning for an Image Based Network Intrusion Detection System (IEEE, 2023-12)
Authors: Hosler, Ryan; Sundar, Agnideven; Zou, Xukai; Li, Feng; Gao, Tianchong
Affiliation: Computer and Information Science, Purdue School of Science
Abstract: The most cost-effective method of cybersecurity is prevention. Organizations and individuals therefore use Network Intrusion Detection Systems (NIDS) to inspect network flows for potential intrusions. However, Deep Learning-based NIDS still struggle with high false alarm rates and with detecting novel, unseen attacks. In this paper, we propose a novel NIDS framework based on generating images from feature vectors and applying Unsupervised Deep Learning. We evaluate this method on four publicly available datasets and demonstrate an accuracy improvement of up to 8.25% over Deep Learning models applied to the original feature vectors.
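The last two items share a pipeline: a flow's statistical feature vector is rendered as a small image, a model is trained on benign traffic only, and flows the model handles poorly are flagged. The sketch below shows one common unsupervised realization of that idea, a convolutional autoencoder whose reconstruction error serves as the anomaly score; the 8x8 image size, the architecture, and the thresholding rule are assumptions for illustration rather than the authors' configuration.

```python
# Hypothetical image-based unsupervised NIDS sketch: reshape flow features
# into an "image", train an autoencoder on benign flows only, and treat
# high reconstruction error as a potential intrusion.
import torch
import torch.nn as nn

def features_to_image(flow: torch.Tensor, side: int = 8) -> torch.Tensor:
    """Zero-pad or truncate a feature vector into a (1, side, side) image."""
    padded = torch.zeros(side * side)
    n = min(flow.numel(), side * side)
    padded[:n] = flow[:n]
    return padded.view(1, side, side)

class FlowAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 4, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def is_anomalous(model: FlowAutoencoder, img: torch.Tensor,
                 threshold: float) -> bool:
    """Flag a flow whose reconstruction error exceeds a benign-calibrated threshold."""
    with torch.no_grad():
        recon = model(img.unsqueeze(0))
        err = torch.mean((recon - img.unsqueeze(0)) ** 2)
    return err.item() > threshold
```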