- Electrical & Computer Engineering Department Theses and Dissertations
Information about the Purdue School of Engineering and Technology Graduate Degree Programs available at IUPUI can be found at: http://www.engr.iupui.edu/academics.shtml
Recent Submissions
Item Capacitorless Power Electronics Converters Using Integrated Planar Electro-Magnetics (2024-08)
Kanakri, Haitham; Cipriano Dos Santos, Euzeli, Jr.; Rizkalla, Maher; Li, Lingxi; King, Brian
The short lifespan of capacitors in power electronics converters is a significant challenge. These capacitors, often electrolytic, are vital for voltage smoothing and frequency filtering. However, their susceptibility to heat, ripple current, and aging can lead to premature faults, causing problems such as output voltage instability and short circuits and ultimately resulting in catastrophic failure and system shutdown. Capacitors are responsible for 30% of power electronics failures. To tackle this challenge, scientists, researchers, and engineers are exploring various approaches detailed in the technical literature, including alternative capacitor technologies, active and passive cooling solutions, and advanced monitoring techniques to predict and prevent failures. However, these solutions often come with drawbacks such as increased complexity, reduced efficiency, or higher upfront costs. Materials research into corrosion-resistant capacitors is ongoing, but such devices are not yet readily available. This dissertation presents a capacitorless solution for dc-dc and dc-ac converters. The proposed solution harnesses parasitic elements and integrates them as intrinsic components of the power converter. This approach holds the promise of enhancing power electronics reliability ratings, thereby facilitating breakthroughs in electric vehicles, compact power processing units, and renewable energy systems. The central scientific premise of this dissertation is that the capacitance requirement in a power converter can be met by deliberately augmenting parasitic components.
Our research hypothesis is that incorporating high-dielectric-material thin films, fabricated using nanotechnology, into planar magnetics will enable a family of capacitorless electronic converters that do not rely on discrete capacitors. This approach represents a departure from the traditional power converter schemes employed in industry. The first family of converters introduces a novel capacitorless solid-state power filter (SSPF) for single-phase dc-ac converters. The proposed configuration, comprising a planar transformer and an H-bridge converter operating at high frequency, generates sinusoidal ac voltage without relying on capacitors. Another dc-ac inverter design is the twelve-step six-level inverter, which likewise contains no capacitors. The second family of capacitorless topologies consists of non-isolated dc-dc converters, namely the buck converter and the buck-boost converter. These converters use materials with high dielectric constants, such as calcium copper titanate (CCTO), to intentionally enhance specific parasitic components, notably interwinding capacitance. This approach reduces reliance on external discrete capacitors and facilitates the development of highly reliable converters. The study also discusses the design specifications required for these parasitic capacitors, and provides comprehensive finite element analysis solutions and detailed circuit models.
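To make the parasitic-capacitance premise concrete, the sketch below estimates the parallel-plate capacitance a high-k thin film can contribute. This is an illustration, not the dissertation's design procedure: the permittivity value, electrode area, and film thickness are all assumptions chosen for the example (CCTO films are reported with very high relative permittivity, here taken as 10,000).

```python
# Illustrative sketch: parasitic plate capacitance achievable with an
# assumed high-k thin film. All numbers below are example values.

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def plate_capacitance(eps_r, area_m2, thickness_m):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / thickness_m


# Assumed: eps_r = 10,000 (CCTO-like), 2 cm x 2 cm electrodes, 1 um film.
c = plate_capacitance(10_000, area_m2=4e-4, thickness_m=1e-6)
print(f"{c * 1e6:.2f} uF")  # tens of microfarads from a thin planar layer
```

Even with modest area, a high-k film yields tens of microfarads, which suggests why deliberately engineered parasitics could replace discrete filter capacitors in some topologies.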
A design example demonstrates the practical application of the proposed concept in the low-voltage-side dc-dc power converters used to supply an electric vehicle's (EV) low-voltage loads.

Item Analysis of Latent Space Representations for Object Detection (2024-08)
Dale, Ashley Susan; Christopher, Lauren; King, Brian; Salama, Paul; Rizkalla, Maher
Deep Neural Networks (DNNs) successfully perform object detection tasks, and the Convolutional Neural Network (CNN) backbone is a commonly used feature extractor before secondary tasks such as detection, classification, or segmentation. In a DNN model, the relationship between the features learned from the training data and the features leveraged by the model during test and deployment has motivated the area of feature interpretability studies. The work presented here applies equally to white-box and black-box models and to any DNN architecture. The metrics developed require no information beyond the feature vector generated by the feature extraction backbone. These methods are therefore the first capable of estimating black-box model robustness in terms of latent space complexity and the first capable of examining feature representations in the latent space of black-box models. This work contributes four novel methodologies and results. First, a method for quantifying the invariance and/or equivariance of a model using the training data shows that the representation of a feature in the model impacts model performance. Second, a method for quantifying an observed domain gap in a dataset using the latent feature vectors of an object detection model is paired with pixel-level augmentation techniques to close the gap between real and synthetic data. This improves the model's F1 score on a test set of outliers from 0.5 to 0.9.
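One simple way to quantify a domain gap from latent feature vectors is the distance between the centroids of the real and synthetic feature clouds. This is a hedged sketch of the general idea, not the thesis's actual metric; the two-dimensional feature vectors are made up for illustration.

```python
# Assumed illustration: quantify a real-vs-synthetic domain gap as the
# Euclidean distance between the centroids of two latent feature clouds.
import math


def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def domain_gap(real_feats, synth_feats):
    """Distance between cloud centroids; smaller means better-aligned domains."""
    return math.dist(centroid(real_feats), centroid(synth_feats))


real = [[1.0, 0.0], [1.2, 0.2]]
synth = [[3.0, 0.0], [3.2, 0.2]]
print(domain_gap(real, synth))  # shrinks as augmentation aligns the domains
```

Pixel-level augmentation that moves the synthetic cloud toward the real one would drive this number down, matching the qualitative story of closing the gap.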
Third, a method for visualizing and quantifying similarities between the latent manifolds of two black-box models is used to correlate similar feature representations with increased success in transferring gradient-based attacks. Finally, a method for examining the global complexity of decision boundaries in black-box models is presented, where more complex decision boundaries are shown to correlate with increased model robustness to gradient-based and random attacks.

Item Deep Learning of Biomechanical Dynamics With Spatial Variability Mining and Model Sparsification (2024-08)
Liu, Ming; Zhang, Qingxue; King, Brian S.; Ben-Miled, Zina; Xia, Yuni
Deep learning of biomechanical dynamics holds great promise for smart health and data-driven precision medicine. Biomechanical dynamics are related to human movement patterns and gait characteristics and may provide important insights when mined by deep learning models. However, efficient deep learning of biomechanical dynamics remains challenging: the dynamics vary widely across body locations, and the model may need to be lightweight enough for real-time deployment. Targeting these challenges, we first studied the spatial variability of biomechanical dynamics to determine the body location that best supports robust physical activity type detection. We then developed a deep learning pruning framework that determines optimal pruning schemes while maintaining acceptable performance. More specifically, the proposed approach first evaluates the layer importance of the deep learning model and then leverages probabilistic distribution-enabled threshold determination to optimize the pruning rate. A weighted random thresholding method is first investigated to build understanding of the pruning behavior of each layer.
A Gaussian-based thresholding method is then designed to optimize the pruning strategies more effectively, finding fine-grained pruning schemes that balance emphasis and diversity regulation. We further enhanced the framework to co-optimize accuracy and continuity during pruning, where continuity means that the pruned locations in the weight matrices should not create too many non-contiguous unpruned regions, so that the model remains implementation-friendly. More specifically, the proposed framework leverages a significance score and a continuity score to quantify the characteristics of each pruned convolutional filter, then uses clustering to group the pruned filters within each convolutional stage. A regularized ranking approach then ranks the pruned filters, placing extra emphasis on the continuity scores to encourage implementation-friendly pruning. Finally, a dual-thresholding strategy increases diversity during the significance-continuity co-optimization. Experimental results demonstrate promising findings: an enhanced understanding of the spatial variability of biomechanical dynamics and of best-performing body location selection; a pruning framework that reduces model size significantly while maintaining performance; and a boosted framework that co-optimizes accuracy and continuity for implementation-friendly pruning.
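The flavor of distribution-based thresholding can be sketched in a few lines: treat a layer's weight magnitudes as roughly Gaussian and derive the pruning threshold from their mean and standard deviation. This is a minimal illustration of the idea only; the thesis's scoring, clustering, and dual-thresholding stages are omitted, and the example weights are invented.

```python
# Hedged sketch of Gaussian-based magnitude pruning: threshold = mean + k*std
# of the layer's |weights|; weights below the threshold are zeroed.
import statistics


def gaussian_threshold(weights, k):
    mags = [abs(w) for w in weights]
    return statistics.mean(mags) + k * statistics.pstdev(mags)


def prune(weights, k):
    t = gaussian_threshold(weights, k)
    return [0.0 if abs(w) < t else w for w in weights]


layer = [0.05, -0.6, 0.02, 1.3, -0.01, 0.4]
print(prune(layer, k=0.0))  # k=0 zeroes weights below the mean magnitude
```

Raising k prunes more aggressively; sweeping k per layer is one simple way to realize layer-specific pruning rates.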
Overall, this research advances deep biomechanical mining toward efficient smart health.

Item Geocasting-based Traffic Management Message Delivery Using C-V2X (2024-08)
Mathew, Abin; Chen, Yaobin; Li, Feng; King, Brian
Cellular Vehicle-to-Everything (C-V2X) refers to vehicles connected to their surroundings using cellular-based networks. With the rise of connected vehicles, C-V2X is emerging as one of the major standards for message transmission in automotive scenarios. This project studies the feasibility of C-V2X-based message transmission by building a prototype system, named RampCast, for transmitting traffic information from roadside message boards to vehicles. The RampCast framework also implements geocasting-based algorithms to deliver messages to targeted vehicles; these algorithms improve location-based message delivery using retransmission and prioritization strategies. The transmitted messages are selected from the 511 web application built by INDOT, which provides live traffic information for the state of Indiana, including travel-time information, crash alerts, and construction alerts. The major objective of this project is to build the RampCast prototype, a system implementing C-V2X networks using a Software Defined Radio (SDR). RampCast implements a publisher-subscriber messaging architecture whose primary actors are a Road Side Unit (RSU) and a vehicle Onboard Unit (OBU). A data store containing traffic messages sourced from the 511 API serves as the input to the system. An end-to-end message transmission pipeline implements the message transmission algorithms on the RSU and OBU sides. Finally, the performance of message transmission is evaluated using a metrics-capturing module. The system was evaluated on a test track in Columbus, Indiana.
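The geocasting-plus-prioritization idea can be sketched as two small decisions at the RSU: deliver only to vehicles inside the message's target region, and send higher-priority alerts first. This is an assumed illustration in the spirit of RampCast; the region shape, priority numbering, and message fields are invented, not taken from the project.

```python
# Assumed sketch of geocast filtering and message prioritization at an RSU.
import math


def in_geocast_region(vehicle_pos, center, radius_m):
    """Deliver only if the vehicle lies within the target radius (assumed circular)."""
    return math.dist(vehicle_pos, center) <= radius_m


def order_by_priority(messages):
    # Illustrative scheme: lower number = higher priority
    # (0: crash alert, 1: construction alert, 2: travel time).
    return sorted(messages, key=lambda m: m["priority"])


msgs = [{"text": "Travel time 12 min", "priority": 2},
        {"text": "Crash ahead", "priority": 0}]
if in_geocast_region((0.0, 50.0), center=(0.0, 0.0), radius_m=300.0):
    for m in order_by_priority(msgs):
        print(m["text"])  # crash alert transmitted before travel-time info
```

Retransmission would layer on top of this: a message that is not acknowledged while the vehicle remains in the region is simply re-queued.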
The captured performance metrics show that the system met the key performance indicators for latency, packet delivery rate, and packet inter-reception rate. The results indicate satisfactory performance of the C-V2X standard for message transmission in RampCast traffic guidance scenarios.

Item Explainable AI Methods For Enhancing AI-Based Network Intrusion Detection Systems (2024-08)
Arreche, Osvaldo Guilherme; King, Brian S.; Abdallah, Mustafa; El-Sharkawy, Mohamed A.
In network security, the exponential growth of intrusions stimulates research toward advanced artificial intelligence (AI) techniques for intrusion detection systems (IDS). However, relying on AI for IDS presents challenges, including the performance variability of different AI models and the lack of explainability of their decisions, which hinders human security analysts' comprehension of the outputs. Hence, this thesis proposes end-to-end explainable AI (XAI) frameworks tailored to enhance the understandability and performance of AI models in this context. The first chapter benchmarks seven black-box AI models across one real-world and two benchmark network intrusion datasets, laying the foundation for subsequent analyses. Subsequent chapters examine feature selection methods, recognizing their crucial role in enhancing IDS performance by extracting the features most significant for identifying anomalies in network security. Leveraging XAI techniques, novel feature selection methods are proposed that outperform traditional approaches. This thesis also introduces an in-depth evaluation framework for black-box XAI-IDS, encompassing global and local scopes.
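XAI-driven feature selection typically reduces to ranking features by the attribution an explainer assigns them and keeping the top few. The sketch below shows that skeleton with made-up attribution values and feature names; the thesis's actual explainers and datasets are not reproduced here.

```python
# Hedged sketch: rank features by mean absolute attribution (from any
# explainer, e.g. SHAP- or LIME-style scores) and keep the top-k.

def top_k_features(attributions, feature_names, k):
    """attributions: per-sample lists of per-feature attribution scores."""
    n_feat = len(feature_names)
    mean_abs = [sum(abs(row[j]) for row in attributions) / len(attributions)
                for j in range(n_feat)]
    ranked = sorted(range(n_feat), key=lambda j: mean_abs[j], reverse=True)
    return [feature_names[j] for j in ranked[:k]]


# Invented example: two samples, three flow features.
attrs = [[0.9, -0.1, 0.3], [0.8, 0.0, -0.4]]
names = ["flow_duration", "dst_port", "pkt_rate"]
print(top_k_features(attrs, names, k=2))
```

Retraining the IDS on only the selected features is then what lets such a method be compared head-to-head against traditional (e.g., Information Gain-based) selection.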
Six evaluation metrics are analyzed, descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness, providing insights into the limitations and strengths of current XAI methods. Finally, the thesis addresses the potential of ensemble learning for AI-based network intrusion detection by proposing a two-level ensemble learning framework comprising base learners and ensemble methods trained on input datasets to generate evaluation metrics and new datasets for subsequent analysis. Feature selection is integrated into both levels, leveraging XAI-based and Information Gain-based techniques. Holistically, this thesis offers a comprehensive approach to enhancing network intrusion detection through the synergy of AI, XAI, and ensemble learning, providing open-source code and insights into model performance. It thereby advances interpretable AI models for network security, empowering security analysts to make informed decisions in safeguarding networked systems.

Item Trustworthy and Efficient Blockchain-based E-commerce Model (2024-08)
Shankar Kumar, Valli Sanghami; Lee, John; King, Brian; Kim, Dongsoo; Hu, Qin
Amid the rising popularity of digital marketplaces, addressing issues such as non-payment/non-delivery crimes, centralization risks, hacking threats, and the complexity of ownership transfers has become imperative. Many existing studies exploring blockchain technology in digital marketplaces and asset management merely touch on various application scenarios without establishing a unified platform that ensures trustworthiness and efficiency across the product life cycle. In this thesis, we design a reliable and efficient e-commerce model for trading various assets. To enhance customer engagement through consensus, we use the XGBoost algorithm to identify loyal nodes from the pool of platform entities.
Alongside appointed nodes, these loyal nodes actively participate in the consensus process. The consensus algorithm guarantees that all involved nodes agree on the blockchain's current state. We introduce a novel consensus mechanism named Modified Practical Byzantine Fault Tolerance (M-PBFT), derived from the Practical Byzantine Fault Tolerance (PBFT) protocol, to minimize communication overhead and improve overall efficiency. The modifications primarily target the leader election process and the communication protocols between leader and follower nodes within the PBFT consensus framework. For tangible assets, our primary objective is to raise trust among stakeholders and bolster the reputation of sellers. To this end, we validate secondhand products and their seller-provided descriptions before the products are exchanged; this validation process also holds the various entities accountable for their actions. Validators, selected by location and qualifications, check the products' descriptions and generate validation certificates, which are securely recorded on the blockchain. To incentivize validator participation and uphold honest validation of product quality, we introduce an incentive mechanism based on Stackelberg game theory. For intangible asset management, we use Non-Fungible Token (NFT) technology to tokenize these assets. This approach enhances traceability of ownership, transactions, and historical data, while also automating processes such as dividend distributions, royalty payments, and ownership transfers through smart contracts. Sellers first mint NFTs and use the InterPlanetary File System (IPFS) to store the files related to NFTs, the NFT metadata, or both, since IPFS provides resilient, decentralized storage for our network. The data stored in IPFS is encrypted for security.
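As background to the PBFT-derived consensus above, the fault-tolerance arithmetic that PBFT-family protocols rest on is easy to state: with n replicas, at most f = (n - 1) // 3 Byzantine nodes can be tolerated, and a decision requires a quorum of 2f + 1 matching replies. The sketch below shows only this standard arithmetic, not the M-PBFT leader-election or messaging changes.

```python
# Standard PBFT fault-tolerance arithmetic (background, not M-PBFT itself).

def max_faulty(n):
    """Largest number of Byzantine replicas tolerable among n total."""
    return (n - 1) // 3


def quorum(n):
    """Matching replies needed for a decision: 2f + 1."""
    return 2 * max_faulty(n) + 1


for n in (4, 7, 10):
    print(f"n={n}: tolerates f={max_faulty(n)}, quorum={quorum(n)}")
```

Since every replica exchanges messages with every other in classic PBFT, communication grows quadratically with n; limiting active participants to appointed plus loyal nodes, as above, is one way to keep that overhead down.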
Further, to help sellers price their NFTs efficiently, we employ a Stackelberg mechanism. To achieve finer access control over NFTs containing sensitive data and to increase sellers' profits, we propose a Popularity-based Adaptive NFT Management Scheme (PANMS) using Reinforcement Learning (RL). To facilitate prompt and effective asset sales, we design a smart contract-powered auction mechanism. To improve data recording and event response efficiency, we introduce a weighted L-H index algorithm, which determines efficient nodes for broadcasting transactions, and transaction prioritization, which favors transactions such as payments, verdicts in conflicts between sellers and validators, and validation reports. Simulation experiments demonstrate the accuracy and efficiency of the proposed schemes.

Item Enhanced Multiple Dense Layer EfficientNet (2024-08)
Mohan, Aswathy; El-Sharkawy, Mohamed; King, Brian; Rizkalla, Maher
In the dynamic and ever-evolving landscape of Artificial Intelligence (AI), deep learning has emerged as a pivotal force, propelling advancements across a broad spectrum of applications, notably image classification. Image classification, the task of categorizing images into predefined classes, underpins numerous technologies, including automated surveillance, facial recognition systems, and advanced diagnostics in healthcare. Despite significant strides in the area, the quest for models that excel in accuracy, generalize robustly across varied datasets, and resist overfitting remains a formidable challenge.
EfficientNetB0, a model celebrated for its balance between computational efficiency and accuracy, stands at the forefront of solutions to these challenges. However, the nuanced complexity of datasets such as CIFAR-10, with its diverse images spanning ten distinct categories, calls for specialized adaptations to harness the full potential of such architectures. In response, this thesis introduces an optimized version of the EfficientNetB0 architecture enhanced with strategic architectural modifications, including an additional Dense layer with 512 units and Dropout regularization. These adjustments amplify the model's capacity to learn and interpret the complex patterns in the data. Complementing these architectural refinements, a two-phase training methodology is adopted. Training begins with the base model's pre-trained weights frozen, leveraging transfer learning to secure a solid foundational understanding. A subsequent fine-tuning phase selectively unfreezes layers to calibrate the model to the intricacies of the CIFAR-10 dataset, bolstered by adaptive learning rate adjustments that keep training both efficient and responsive to the learning curve. Through a comprehensive suite of evaluations, encompassing accuracy assessments, confusion matrices, and detailed classification reports, the proposed model demonstrates notable improvement in performance. The insights from this research shed light on the mechanisms underpinning successful image classification models and chart a course for future work aimed at bridging the gap between theoretical models and their practical applications.
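The two-phase schedule described above can be sketched framework-free as a configuration function: phase 1 trains only the new head (Dense-512 plus Dropout plus classifier) with the backbone frozen, and phase 2 unfreezes the top backbone blocks at a much smaller learning rate. The layer names, number of unfrozen blocks, and learning rates below are illustrative assumptions, not the thesis's exact settings.

```python
# Hedged, framework-free sketch of a two-phase transfer-learning schedule.

def phase_config(phase, backbone_layers, head_layers):
    if phase == 1:
        trainable = list(head_layers)  # backbone frozen; train the new head only
        lr = 1e-3
    else:
        # Partial unfreeze: top two backbone blocks join the head (assumed depth).
        trainable = backbone_layers[-2:] + list(head_layers)
        lr = 1e-5                      # fine-tune gently to protect pre-trained weights
    return {"trainable": trainable, "learning_rate": lr}


backbone = ["block1", "block2", "block3", "block4"]
head = ["dense_512", "dropout", "classifier_10"]
print(phase_config(1, backbone, head))
print(phase_config(2, backbone, head))
```

The sharp learning-rate drop in phase 2 is the key design choice: it lets the unfrozen backbone adapt to CIFAR-10 without destroying the pre-trained features.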
This research encapsulates the iterative process of model enhancement, providing a guide for future efforts toward optimal image classification solutions.

Item Automated Evaluation of Neurological Disorders Through Electronic Health Record Analysis (2024-08)
Prince, Md Rakibul Islam; Ben Miled, Zina; El-Sharkawy, Mohamed A.; Zhang, Qingxue
Neurological disorders present a considerable challenge due to their variety and diagnostic complexity, especially for older adults. Early prediction of onset and ongoing assessment of severity can enable timely interventions. Currently, most assessment tools are time-consuming, costly, and unsuitable for primary care. To reduce this burden, the present thesis introduces passive digital markers (PDMs) for different disease conditions that automate severity assessment and risk prediction from different modalities of electronic health records (EHR). The first phase of the study develops PDMs for the functional assessment of patients suffering from bipolar disorder and schizophrenia. The second phase explores different PDM architectures for predicting patients at risk of dementia. The functional severity PDM uses a single EHR modality, medical notes, to assess the functional severity of schizophrenia, bipolar type I, or mixed bipolar patients. The input is a single medical note from the patient's electronic medical record, which is submitted to a hierarchical BERT model that classifies at-risk patients. A hierarchical attention mechanism is adopted because medical notes can exceed the maximum number of tokens allowed by most language models, including BERT. The functional severity PDM follows three steps. First, a sentence-level embedding is produced for each sentence in the note using a token-level attention mechanism.
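Both attention steps in a hierarchical model reduce to the same pooling operation: score each embedding, softmax the scores into weights, and return the weighted sum. The sketch below shows that operation on toy two-dimensional embeddings; it is a generic illustration of attention pooling, not the thesis's trained model.

```python
# Minimal sketch of attention pooling, the core of both the token-level and
# sentence-level attention steps in a hierarchical encoder.
import math


def attention_pool(embeddings, scores):
    """Softmax the scores, then return the weighted sum of the embeddings."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(embeddings[0])
    return [sum(w * emb[i] for w, emb in zip(weights, embeddings))
            for i in range(dim)]


sentences = [[1.0, 0.0], [0.0, 1.0]]   # toy sentence embeddings
pooled = attention_pool(sentences, scores=[0.0, 0.0])  # equal scores -> mean
print(pooled)
```

In the real model the scores are learned, so clinically salient sentences receive higher weight in the note embedding instead of the uniform average shown here.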
Second, an embedding for the entire note is constructed using a sentence-level attention mechanism. Third, the final embedding is classified by a feed-forward neural network that estimates the patient's impairment level. When used prior to disease onset, this PDM differentiates between severe and moderate functioning levels with an AUC of 76%. Disease-specific severity assessment PDMs apply only after disease onset and achieve AUCs of nearly 85% for schizophrenia and bipolar patients. The dementia risk prediction PDM considers multiple EHR modalities, including socio-demographic data, diagnosis codes, and medical notes. Moreover, the observation period and prediction horizon are varied for a better understanding of the model's practical limitations. This PDM identifies patients at risk of dementia with AUCs ranging from 70% to 92% as the observation period approaches the index date. The study thus introduces methodologies for automating important clinical outcomes, such as assessing the general functioning of psychiatric patients and predicting dementia risk, using only routine care data.

Item Towards No-Penalty Control Hazard Handling in RISC Architecture Microcontrollers (2024-08)
Balasubramanian, Linknath Surya; Rizkalla, Maher E.; Lee, John J.; Ytterdal, Trond; Kumar, Mukesh
Achieving higher throughput is one of the most important requirements of a modern microcontroller, which therefore cannot afford to waste a considerable number of clock cycles on branch mispredictions. This work proposes a hardware mechanism that lets microcontrollers forgo branch predictors, thereby removing branch mispredictions. The scope of this work is limited to low-cost microcontroller cores used in embedded systems.
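The core idea of removing the branch predictor is that, if operands are available early, the next fetch address can be computed outright instead of guessed. The sketch below illustrates that next-PC selection in software; the instruction encoding, field names, and 4-byte instruction size are assumptions for illustration, not the thesis's hardware design.

```python
# Hedged software illustration of resolving the next PC in the fetch stage:
# with forwarded operand values available, the branch outcome is computed,
# never predicted, so the next fetch address is always correct.

def next_pc(pc, instr, rs1_val=0, rs2_val=0):
    if instr["op"] == "beq":                 # conditional branch: resolve now
        taken = (rs1_val == rs2_val)
        return pc + instr["offset"] if taken else pc + 4
    if instr["op"] == "jal":                 # unconditional jump
        return pc + instr["offset"]
    return pc + 4                            # ordinary sequential instruction


print(hex(next_pc(0x100, {"op": "beq", "offset": 0x40}, rs1_val=5, rs2_val=5)))
```

The one case this cannot cover, as the abstract notes, is a branch whose operands are produced by the immediately preceding instruction, since those values are not yet available to forward.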
The proposed technique is implemented as five modules that work together to forward required operands, resolve branches without prediction, and calculate the next instruction's address in the first stage of an in-order, five-stage pipelined microarchitecture. Because the address of the instruction following a control transfer instruction is calculated in the first pipeline stage, branch prediction is no longer necessary, eliminating the clock cycle penalties incurred when using a branch predictor. The designed architecture successfully calculates and fetches the next correct instruction without wasting clock cycles, except when a control transfer instruction has a true dependence on its immediately preceding instruction. Further, we synthesized the proposed design in a 7 nm FinFET process and compared its latency with other designs to ensure that the microcontroller's operating frequency is not degraded. The critical path latency of the instruction fetch stage integrated with the proposed architecture is 307 ps, excluding the instruction cache access time.

Item Design of Ultra-Low Power FinFET Charge Pumps for Energy Harvesting Systems (2024-08)
Atluri, Mohan Krishna; Rizkalla, Maher E.; King, Brian S.; Christopher, Lauren A.
This work introduces an ultra-low-voltage charge pump for energy harvesters in biosensors. The unique aspect of the proposed charge pump is its two-level design: the first stage elevates the voltage to a specific level, and the output voltage of this stage becomes the input voltage of the second stage. Using two levels reduces the number of stages in the charge pump and improves efficiency, yielding a higher voltage gain. In our measurements, this design converted an 85 mV input to a 608.2 mV output, approximately 7.15 times the input voltage, while driving a 7 MΩ load at 29.5% conversion efficiency.
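The reported figures can be checked with a line of arithmetic: 85 mV boosted to 608.2 mV is a voltage gain of about 7.15, and in a two-level design the per-stage gains multiply to give the overall gain. The equal split between stages below is an assumption for illustration; the abstract does not state how the gain divides between the two levels.

```python
# Arithmetic check of the measured figures quoted in the abstract.

v_in, v_out = 0.085, 0.6082          # volts (85 mV in, 608.2 mV out)
gain = v_out / v_in
print(f"overall gain = {gain:.2f}")  # ~7.15, matching the abstract

# In a two-level pump, stage gains multiply: g1 * g2 = overall gain.
# An (assumed) equal split gives each stage a gain of sqrt(gain).
per_stage = gain ** 0.5
print(f"equal-split per-stage gain = {per_stage:.2f}")
```

This multiplication of stage gains is why cascading two levels reaches a high overall gain with fewer pump stages than a single long chain would need.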