Department of Electrical and Computer Engineering Works

Recent Submissions

Now showing 1 - 10 of 232
  • Item
    The Effect of Behavioral Probability Weighting in a Simultaneous Multi-Target Attacker-Defender Game
    (IEEE, 2021) Abdallah, Mustafa; Cason, Timothy; Bagchi, Saurabh; Sundaram, Shreyas; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    We consider a security game in a setting consisting of two players (an attacker and a defender), each with a given budget to allocate towards attack and defense, respectively, of a set of nodes. Each node has a certain value to the attacker and the defender, along with a probability of being successfully compromised, which is a function of the investments in that node by both players. For such games, we characterize the optimal investment strategies by the players at the (unique) Nash Equilibrium. We then investigate the impacts of behavioral probability weighting on the investment strategies; such probability weighting, where humans overweight low probabilities and underweight high probabilities, has been identified by behavioral economists to be a common feature of human decision-making. We show via numerical experiments that behavioral decision-making by the defender causes the Nash Equilibrium investments in each node to change (where the defender overinvests in the high-value nodes and underinvests in the low-value nodes).
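    The abstract does not name the specific weighting function used; a common choice in the behavioral-economics literature it cites is the Prelec function, which overweights low probabilities and underweights high ones. A minimal illustrative sketch (the parameter value is a hypothetical example, not from the paper):

    ```python
    import math

    def prelec_weight(p: float, alpha: float = 0.6) -> float:
        """Prelec probability weighting: w(p) = exp(-(-ln p)^alpha).

        For alpha < 1 this overweights small probabilities and
        underweights large ones, the behavioral pattern described above.
        """
        if p <= 0.0:
            return 0.0
        if p >= 1.0:
            return 1.0
        return math.exp(-((-math.log(p)) ** alpha))

    # Low probabilities are inflated, high probabilities deflated:
    print(prelec_weight(0.05))  # > 0.05
    print(prelec_weight(0.95))  # < 0.95
    ```

    Replacing true compromise probabilities with such weighted ones in a defender's expected-loss calculation is what shifts the Nash Equilibrium investments toward high-value nodes.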
  • Item
    Natural Gamma Transmutation Studies
    (American Astronomical Society, 2021) Schubert, Peter J.; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    Beyond Earth’s magnetosphere is a spectrum of extragalactic energetic photons of mysterious origin. Called the “gamma fog,” this source of gamma rays provides a unique opportunity to study neutron generation (from beryllium) and the use of such neutrons to transmute elements found in the lunar crust. There are also many commercial applications.
  • Item
    EffCNet: An Efficient CondenseNet for Image Classification on NXP BlueBox
    (Science Publishing Group, 2021) Kalgaonkar, Priyank; El-Sharkawy, Mohamed; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    Intelligent edge devices with built-in processors vary widely in capability and physical form, yet are expected to perform advanced Computer Vision (CV) tasks such as image classification and object detection. With constant advances in autonomous cars, UAVs, embedded systems, and mobile devices, there is ever-growing demand for extremely efficient Artificial Neural Networks (ANNs) for real-time inference on smart edge devices with constrained computational resources. With unreliable network connections in remote regions and the added complexity of data transmission, it is of the utmost importance to capture and process data locally instead of sending it to cloud servers for remote processing. Edge devices, on the other hand, offer limited processing power due to their inexpensive hardware and limited cooling and computational resources. In this paper, we propose a novel deep convolutional neural network architecture called EffCNet, an improved and efficient version of the CondenseNet Convolutional Neural Network (CNN) for edge devices. EffCNet utilizes self-querying data augmentation and depthwise separable convolutional strategies to improve real-time inference performance and to reduce the final trained model size, trainable parameters, and Floating-Point Operations (FLOPs). Furthermore, extensive supervised image classification analyses are conducted on two benchmark datasets, CIFAR-10 and CIFAR-100, to verify the real-time inference performance of our proposed CNN. Finally, we deploy the trained weights on the NXP BlueBox, an intelligent edge development platform designed for self-driving vehicles and UAVs, and draw conclusions accordingly.
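    The FLOP savings from depthwise separable convolutions can be illustrated with a back-of-the-envelope multiply-accumulate count. The layer sizes below are hypothetical examples, not EffCNet's actual configuration:

    ```python
    def conv_flops(h, w, c_in, c_out, k):
        """MACs for a standard k x k convolution over an h x w feature map."""
        return h * w * c_in * c_out * k * k

    def dws_conv_flops(h, w, c_in, c_out, k):
        """MACs for a depthwise separable convolution:
        a k x k depthwise filter per input channel, then a 1 x 1 pointwise mix."""
        depthwise = h * w * c_in * k * k
        pointwise = h * w * c_in * c_out
        return depthwise + pointwise

    # Example layer (illustrative sizes only):
    std = conv_flops(32, 32, 64, 128, 3)
    dws = dws_conv_flops(32, 32, 64, 128, 3)
    print(f"standard: {std:,}  separable: {dws:,}  ratio: {std / dws:.1f}x")
    ```

    For 3 x 3 kernels the separable form is roughly 8-9x cheaper, which is the lever behind the reduced model size and FLOP counts reported above.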
  • Item
    Actinide concentration from lunar regolith via hydrocyclone density separation
    (Longdom Publishing, 2021) Schubert, Peter J.; Kindomba, Eli; Hantzis, Connor; Conaway, Adam; Yeong, Haoyee; Littell, Steven; Palani, Sashindran; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    Beneficiation of regolith to concentrate the high-density ore fraction from the gangue can be accomplished through momentum transfer methods, such as ballistic deflection or cyclonic separation. This study explores the extraction of actinide-bearing minerals from lunar regolith based on the difference in apparent density between thorium-bearing minerals (e.g., ThO2, ρ = 10 g/cm³) and silicates (e.g., SiO2, ρ = 2.65 g/cm³). Thorium content in lunar regolith ranges from single-digit parts per million (ppm) to as high as 60 ppm. Concentrating thorium-bearing minerals is a required first step in the preparation of fission fuels for a nuclear reactor in which all of the radioactive operations are performed 380,000 km from the Earth’s biosphere. After comparison with ballistic deflection, cyclone separation with a non-volatile fluid carrier was chosen for further study. With sieving to separate particles by size, such a hydrocyclone can be used to efficiently separate the dense fraction from the lighter minerals. Design equations were used to fabricate an at-scale apparatus using water, iron particles, and glass beads as simulants. Results show the ability to effect a 2 to 5.4% increase in dense-fraction concentration each pass, such that 95% concentration requires between 50 and 100 passes, or a cascade of this many apparatuses. The selection of a suitable fluid for safe and low-mass transport to the Moon is part of a techno-economic analysis of the cost and infrastructure needed to produce highly-purified thorium minerals on the lunar surface.
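    The pass-count arithmetic can be sketched under a simplifying assumption: each pass multiplies the dense-fraction concentration by a constant enrichment factor. The starting concentration below is a hypothetical value for the simulant mix, not a figure from the study:

    ```python
    def passes_to_target(c0: float, gain: float, target: float = 0.95) -> int:
        """Passes (or cascaded hydrocyclones) needed for the dense-fraction
        concentration to reach `target`, assuming each pass multiplies the
        concentration by (1 + gain). Simplified model for illustration."""
        c, n = c0, 0
        while c < target:
            c = min(1.0, c * (1.0 + gain))
            n += 1
        return n

    # Hypothetical starting concentration of 0.35; per-pass gains from the study:
    print(passes_to_target(0.35, 0.020))   # ~50 passes at 2% per pass
    print(passes_to_target(0.35, 0.054))   # far fewer at 5.4% per pass
    ```

    Under this toy model the 2% per-pass gain lands near the 50-pass lower bound quoted above; the real pass count also depends on the feed concentration and per-pass losses.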
  • Item
    On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems
    (MDPI, 2024-05-29) Nazat, Sazid; Arreche, Osvaldo; Abdallah, Mustafa; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    The recent advancements in autonomous driving come with the associated cybersecurity risk of compromised networks of autonomous vehicles (AVs), motivating the use of AI models for detecting anomalies on these networks. In this context, the use of explainable AI (XAI) to explain the behavior of these anomaly detection models is crucial. This work introduces a comprehensive framework for assessing black-box XAI techniques for anomaly detection in AVs, facilitating the examination of both global and local XAI methods that elucidate the decisions of AI models classifying anomalous AV behavior. Considering six evaluation metrics (descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness), the framework evaluates two well-known black-box XAI techniques, SHAP and LIME: each technique is first applied to identify the primary features crucial for anomaly classification, followed by extensive experiments assessing SHAP and LIME across the six metrics on two prevalent autonomous driving datasets, VeReMi and Sensor. This study advances the deployment of black-box XAI methods for real-world anomaly detection in autonomous driving systems, contributing valuable insights into the strengths and limitations of current black-box XAI methods in this critical domain.
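    To make one of the six metrics concrete, sparsity can be quantified as how much of an explanation's attribution mass is concentrated in its top features. This is one illustrative formulation; the paper's exact metric definition may differ:

    ```python
    import numpy as np

    def attribution_sparsity(attributions: np.ndarray, k: int) -> float:
        """Fraction of total absolute attribution mass carried by the
        top-k features. Values near 1.0 mean the explanation relies on
        few features (sparse); values near k/n mean mass is spread evenly.

        Illustrative formulation only -- not necessarily the paper's
        exact definition of its sparsity metric.
        """
        mass = np.abs(attributions)
        total = mass.sum()
        if total == 0:
            return 0.0
        top_k = np.sort(mass)[-k:].sum()
        return float(top_k / total)

    # A concentrated explanation vs. a diffuse one over 6 features:
    print(attribution_sparsity(np.array([0.9, 0.05, 0.02, 0.01, 0.01, 0.01]), 2))
    print(attribution_sparsity(np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0]), 2))
    ```

    The same attribution vectors produced by SHAP or LIME can be fed to a metric like this, which is what lets the two techniques be compared on equal footing.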
  • Item
    Simulation of Authentication in Information-Processing Electronic Devices Based on Poisson Pulse Sequence Generators
    (MDPI, 2022) Maksymovych, Volodymyr; Nyemkova, Elena; Justice, Connie; Shabatura, Mariia; Harasymchuk, Oleh; Lakh, Yuriy; Rusynko, Morika; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    Poisson pulse sequence generators are quite well studied, have good statistical properties, and are implemented in both software and hardware, but they have not yet been used for authentication. This work was devoted to modeling authenticators of information-processing electronic devices by creating a bit-template simulator based on a Poisson pulse sequence generator (PPSG). The generated templates imitated an important property of real bit templates, which reflects the physical uniqueness of electronic devices: Hamming distances between arbitrary template pairs for the same device were much smaller than the distances between arbitrary template pairs for two different devices. The limits of the control code values were determined by setting the range of the average frequency values of the output pulse sequence with the Poisson distribution law. The specified parameters of the output pulse sequence were obtained by optimizing the parameters of the PPSG structural elements. A combination of pseudo-random sequences with the control code’s different values formed the bit template. The comparison of the Hamming distance between the standard and real-time templates with a given threshold value was used as the validation mechanism. The simulation experiment results confirmed the unambiguous authentication of devices. The simulation results also showed similarities with the real data obtained for the bit templates of personal computers’ own noise. The proposed model could be used to improve the cybersecurity of a corporate network as an additional factor in the authentication of information-processing electronic devices for which the measurement of noise with the required accuracy is not possible or is significantly difficult.
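    The threshold-based validation step described above can be sketched in a few lines. The templates and threshold below are toy values for illustration, not the paper's parameters:

    ```python
    def hamming_distance(a: bytes, b: bytes) -> int:
        """Number of differing bits between two equal-length bit templates."""
        assert len(a) == len(b)
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

    def authenticate(stored: bytes, measured: bytes, threshold: int) -> bool:
        """Accept the device iff the real-time template is within the
        Hamming-distance threshold of the stored reference template."""
        return hamming_distance(stored, measured) <= threshold

    # Same device: a few noisy bit flips -> small distance -> accepted.
    # Different device: roughly half the bits differ -> rejected.
    ref      = bytes([0b10110010, 0b01101100])
    same_dev = bytes([0b10110011, 0b01101100])   # 1 bit flipped
    other    = bytes([0b01011101, 0b10010011])   # mostly different
    print(authenticate(ref, same_dev, threshold=4))  # True
    print(authenticate(ref, other, threshold=4))     # False
    ```

    The property the simulator must reproduce is exactly this separation: intra-device distances cluster well below the threshold while inter-device distances sit far above it.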
  • Item
    SRAM design leveraging material properties of exploratory transistors
    (Elsevier, 2022) Gopinath, Anoop; Cochran, Zachary; Ytterdal, Trond; Rizkalla, Maher; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    While MOSFET miniaturization continues to face increased challenges related to process variations, supply-voltage scaling, and leakage currents, exploratory devices such as Graphene Nanoribbon Field Effect Transistors (GNRFET), Tunnel Field Effect Transistors (TFET), and Carbon Nanotube Field Effect Transistors (CNFET) could provide solutions for continued device scaling with better power/performance trade-offs. The GaN TFET used in this work is a heterojunction device, where the material properties of Ga and N result in a bandgap of 3.2 eV and an ultra-low IOFF current. Moreover, the properties of Ga and N result in a steep-switching device with a subthreshold swing of 30 mV/decade. The structure of graphene results in high electron conductivity, even at very high operating temperatures, making GNRFETs high-performance devices. However, graphene-specific line-edge roughness can degrade performance, and thus process-related variations can have a negative impact for GNRFETs. In CNFETs, the mobility and the velocity of the conducting carriers are functions of the carbon nanotube and the gate length. The length and the diameter of the carbon nanotube add parasitic capacitance and resistance to the model, contributing to slower device speed. In this paper, the impact on power, performance, and static noise margins (SNM) of the traditional single-port 6T-SRAM and modified single-port 8T-SRAMs designed using exploratory devices is analyzed with a set figure of merit (FOM), elucidating the material properties of the devices. The results obtained from this work show that GNRFET-based SRAMs have very high performance with a worst-case memory access time of 27.7 ps for a 16x4-bit 4-word array, while CNFET-based SRAM bitcells consume the lowest average power during read/write simulations at 3.84 µW, and TFET-based SRAM bitcells show the best overall average and static power consumption at 4.79 µW and 57.8 pW. A comparison of these exploratory devices with FinFET and planar CMOS showed that FinFET-based SRAM bitcells consumed the lowest static power at 39.8 pW and CMOS-based SRAM had the best read, write, and hold SNMs of 201 mV, 438 mV, and 413 mV, respectively.
  • Item
    Flexible and Scalable Annotation Tool to Develop Scene Understanding Datasets
    (National Science Foundation, 2022) Elahi, Md Fazle; Tian, Renran; Luo, Xiao; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    Recent progress in data-driven vision and language-based tasks demands training datasets enriched with multiple modalities representing human intelligence. The link between text and image data is one of the crucial modalities for developing AI models. The development process of such datasets in the video domain requires much effort from researchers and annotators (experts and non-experts). Researchers re-design annotation tools to extract knowledge from annotators to answer new research questions, and the whole process repeats for each new question, which is time-consuming. Yet over the last decade there has been little change in how researchers and annotators interact with the annotation process. We revisit the annotation workflow and propose a concept of an adaptable and scalable annotation tool. The concept emphasizes its users’ interactivity to make annotation process design seamless and efficient. Researchers can conveniently add newer modalities to, or augment, extant datasets using the tool, and annotators can efficiently link free-form text to image objects. For conducting human-subject experiments at any scale, the tool supports data collection for attaining group ground truth. We conducted a case study using a prototype tool with two groups, involving 74 non-expert participants. We find that the interactive linking of free-form text to image objects feels intuitive and evokes a thought process resulting in high-quality annotation. The new design shows ≈ 35% improvement in data annotation quality. On UX evaluation, we received above-average positive feedback from 25 people regarding convenience, UI assistance, usability, and satisfaction.
  • Item
    mmFit: Low-Effort Personalized Fitness Monitoring Using Millimeter Wave
    (IEEE, 2022) Xie, Yucheng; Jiang, Ruizhe; Guo, Xiaonan; Wang, Yan; Cheng, Jerry; Chen, Yingying; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    There is a growing trend for people to perform workouts at home due to the global pandemic of COVID-19 and the stay-at-home policies of many countries. Since a self-designed fitness plan often lacks professional guidance to achieve ideal outcomes, it is important to have an in-home fitness monitoring system that can track the exercise process of users. Traditional camera-based fitness monitoring may raise serious privacy concerns, while sensor-based methods require users to wear dedicated devices. Recently, researchers have proposed utilizing RF signals to enable non-intrusive fitness monitoring, but these approaches all require huge training efforts from users to achieve satisfactory performance, especially when the system is used by multiple users (e.g., family members). In this work, we design and implement a fitness monitoring system using a single COTS mmWave device. The proposed system integrates workout recognition, user identification, multi-user monitoring, and training-effort reduction modules and makes them work together in a single system. In particular, we develop a domain adaptation framework to reduce the amount of training data collected from different domains by mitigating impacts caused by domain characteristics embedded in mmWave signals. We also develop a GAN-assisted method to achieve better user identification and workout recognition when only limited training data from the same domain is available. We propose a unique spatial-temporal heatmap feature to achieve personalized workout recognition and develop a clustering-based method for concurrent workout monitoring. Extensive experiments with 14 typical workouts involving 11 participants demonstrate that our system can achieve 97% average workout recognition accuracy and 91% user identification accuracy.
  • Item
    AutoForecast: Automatic Time-Series Forecasting Model Selection
    (National Science Foundation, 2022) Abdallah, Mustafa; Rossi, Ryan; Mahadik, Kanak; Kim, Sungchul; Zhao, Handong; Bagchi, Saurabh; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    In this work, we develop techniques for fast automatic selection of the best forecasting model for a new, unseen time-series dataset, without having to first train (or evaluate) all the models on the new time-series data to select the best one. In particular, we develop a forecasting meta-learning approach called AutoForecast that allows for quick inference of the best time-series forecasting model for an unseen dataset. Our approach learns both the performance of forecasting models over the time horizon of the same dataset and task similarity across different datasets. The experiments demonstrate the effectiveness of the approach over state-of-the-art (SOTA) single and ensemble methods and several SOTA meta-learners (adapted to our problem) in terms of selecting better forecasting models (i.e., a 2X gain) for unseen tasks on univariate and multivariate testbeds.
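    The meta-learning idea of inferring a model from dataset similarity, rather than training on the new series, can be sketched minimally. The meta-features, datasets, and model labels below are illustrative assumptions; AutoForecast's actual feature set and learned similarity are far richer:

    ```python
    import numpy as np

    def meta_features(series: np.ndarray) -> np.ndarray:
        """A small illustrative meta-feature vector for a time series."""
        diffs = np.diff(series)
        return np.array([
            series.mean(),
            series.std(),
            diffs.std(),                                  # short-term volatility
            np.corrcoef(series[:-1], series[1:])[0, 1],   # lag-1 autocorrelation
        ])

    def select_model(new_series, known_series, best_models):
        """Return the best-known model of the most similar already-evaluated
        dataset -- no model is trained or evaluated on the new series."""
        target = meta_features(new_series)
        dists = [np.linalg.norm(meta_features(s) - target) for s in known_series]
        return best_models[int(np.argmin(dists))]

    # Hypothetical seen datasets with their (previously measured) best models:
    rng = np.random.default_rng(0)
    seen = [np.sin(np.linspace(0, 20, 200)), rng.standard_normal(200)]
    best = ["seasonal-naive", "historical-mean"]   # illustrative labels
    # A new periodic series is matched to the periodic seen dataset:
    print(select_model(np.sin(np.linspace(0, 20, 200) + 0.1), seen, best))
    ```

    The selection cost here is a few feature computations and distance comparisons, which is why inference is fast relative to training every candidate model on the new data.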