Browsing by Author "Christopher, Lauren A."
Item: 3-D Scene Reconstruction for Passive Ranging Using Depth from Defocus and Deep Learning (2019-08)
Authors: Emerson, David R.; Christopher, Lauren A.; Ben Miled, Zina; King, Brian; Salama, Paul

Depth estimation is becoming increasingly important in computer vision. Autonomous systems must gauge their surroundings in order to avoid obstacles and prevent damage to themselves, other systems, or people. Depth measuring/estimation systems that use multiple cameras from multiple views can be expensive and extremely complex, and as autonomous systems decrease in size and available power, the supporting sensors required to estimate depth must also shrink in size and power consumption. This research concentrates on a single passive method known as Depth from Defocus (DfD), which uses an in-focus and an out-of-focus image to infer the depth of objects in a scene. The major contribution of this research is a new Deep Learning (DL) architecture that processes the in-focus and out-of-focus images to produce a depth map for the scene, improving both speed and performance over a range of lighting conditions. Compared to the previous state-of-the-art multi-label graph cuts algorithm applied to the synthetically blurred dataset, the DfD-Net produced a 34.30% improvement in the average Normalized Root Mean Square Error (NRMSE). Similarly, the DfD-Net architecture produced a 76.69% improvement in the average Normalized Mean Absolute Error (NMAE). Only the Structural Similarity Index (SSIM) showed a small average decrease, of 2.68%, compared to the graph cuts algorithm. This slight reduction in the SSIM value results from the SSIM metric penalizing images that appear noisy: in some instances the DfD-Net output is mottled, which the metric interprets as noise. This research also introduces two methods of deep learning architecture optimization.
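The NRMSE and NMAE metrics quoted above can be sketched in a few lines. This is a minimal illustration assuming the common convention of normalizing by the ground-truth range; the thesis abstract does not state which normalization was used, and the flattened depth lists below are hypothetical.

```python
import math

def nrmse(pred, truth):
    """Root mean square error, normalized by the ground-truth range."""
    rng = max(truth) - min(truth)
    mse = sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)
    return math.sqrt(mse) / rng

def nmae(pred, truth):
    """Mean absolute error, normalized by the ground-truth range."""
    rng = max(truth) - min(truth)
    mae = sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)
    return mae / rng

# Hypothetical depth maps, flattened to 1-D lists of depth values.
truth = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 2.0, 2.8, 4.1]
```

A perfect prediction gives 0 for both metrics; larger values mean larger depth errors relative to the scene's depth range.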
The first method employs a variant of the Particle Swarm Optimization (PSO) algorithm to improve the performance of the DfD-Net architecture. The PSO algorithm found a combination of the number of convolutional filters, the size of the filters, the activation layers used, the use of a batch normalization layer between filters, and the size of the input image used during training that produced a network architecture whose average NRMSE was approximately 6.25% better than the baseline DfD-Net average NRMSE. This optimized architecture also produced an average NMAE that was 5.25% better than the baseline DfD-Net average NMAE. Only the SSIM metric did not see a gain in performance, dropping by 0.26% compared to the baseline DfD-Net average SSIM value. The second method uses a Self-Organizing Map clustering method to reduce the number of convolutional filters in the DfD-Net, reducing the overall run time of the architecture while retaining the network performance exhibited prior to the reduction. This method produces a reduced DfD-Net architecture whose run time decreases by between 14.91% and 44.85%, depending on the hardware running the network. The final reduced DfD-Net showed an overall decrease in the average NRMSE value of approximately 3.4% compared to the baseline, unaltered DfD-Net mean NRMSE value. The NMAE and SSIM results for the reduced architecture were 0.65% and 0.13% below the baseline results, respectively. This illustrates that reducing the network architecture's complexity does not necessarily entail a reduction in performance. Finally, this research introduces a new, real-world dataset captured using a camera with a voltage-controlled microfluidic lens for the visual data and a 2-D scanning LIDAR for the ground truth data.
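The PSO-based architecture search described above can be illustrated with a minimal global-best PSO over a toy search space. The objective, bounds, and coefficients below are hypothetical stand-ins, not the thesis's actual search over filter counts, kernel sizes, activations, batch normalization, and input resolution.

```python
import random

def pso_minimize(objective, bounds, n_particles=12, iters=40,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimize objective(x) over box bounds with a basic global-best PSO."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    gbest = min(pbest, key=objective)[:]             # swarm-wide best position
    gbest_val = objective(gbest)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical stand-in for validation error as a function of
# (number of filters, kernel size); minimum at (32, 5).
random.seed(0)
toy_loss = lambda x: (x[0] - 32) ** 2 / 1000 + (x[1] - 5) ** 2 / 10
best, loss = pso_minimize(toy_loss, [(8, 64), (3, 9)])
```

In the thesis's setting the objective would be a full train-and-validate cycle of the candidate DfD-Net, which is why such searches are expensive.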
The visual data consist of images captured at seven different exposure times and 17 discrete voltage steps per exposure time. The objects in this dataset were divided into four repeating scene patterns using the same surfaces, located between 1.5 and 2.5 meters from the camera and LIDAR, so that any deep learning algorithm tested would see the same texture at multiple depths and multiple blurs. The DfD-Net architecture was employed in two separate tests on the real-world dataset. The first test synthetically blurred the real-world dataset and assessed the performance of the DfD-Net trained on the Middlebury dataset. For scenes between 1.5 and 2.2 meters from the camera, the DfD-Net trained on the Middlebury dataset produced average NRMSE, NMAE, and SSIM values on the real-world dataset that exceeded its results on the Middlebury test set. The second test trained and tested solely on the real-world dataset. Analysis of the camera and lens behavior led to an optimal lens voltage step configuration of 141 and 129. With this configuration, training the DfD-Net resulted in an average NRMSE, NMAE, and SSIM of 0.0660, 0.0517, and 0.8028, with standard deviations of 0.0173, 0.0186, and 0.0641, respectively.

Item: Design of Ultra-Low Power FinFET Charge Pumps for Energy Harvesting Systems (2024-08)
Authors: Atluri, Mohan Krishna; Rizkalla, Maher E.; King, Brian S.; Christopher, Lauren A.

This work introduces an ultra-low-voltage charge pump for energy harvesters in biosensors. The unique aspect of the proposed charge pump is its two-level design, in which the first stage elevates the voltage to a specific level and the output voltage of this stage becomes the input voltage of the second stage. Using two levels reduces the number of stages in the charge pump and improves the efficiency with which a higher voltage gain is achieved.
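The measured figures for this charge pump (85 mV in, 608.2 mV out, a 7 MΩ load, 29.5% efficiency) can be sanity-checked with a few lines of arithmetic. The input-power figure is an inference from the quoted efficiency, not a reported measurement.

```python
v_in = 0.085       # measured input voltage, 85 mV
v_out = 0.6082     # measured output voltage, 608.2 mV
r_load = 7e6       # load resistance, 7 MOhm
eff = 0.295        # reported conversion efficiency

gain = v_out / v_in            # ~7.15, matching the reported voltage gain
p_out = v_out ** 2 / r_load    # power delivered to the load, ~53 nW
p_in = p_out / eff             # implied input power at 29.5% efficiency
```

The nanowatt-scale load power is what makes such a design plausible for energy-harvesting biosensors.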
In our measurements, this charge pump design converted a low 85 mV input voltage to a substantial 608.2 mV output voltage, approximately 7.15 times the input voltage, while driving a 7 MΩ load resistance at a 29.5% conversion efficiency.

Item: Detection of Stroke, Blood Vessel Landmarks, and Leptomeningeal Anastomoses in Mouse Brain Imaging (2022-12)
Authors: Zhang, Leqi; Christopher, Lauren A.; King, Brian; Salama, Paul

Collateral connections in the brain, also known as leptomeningeal anastomoses, are connections between blood vessels originating from different arteries. Although little is known about them, they are thought to be an important contributor to cerebral stroke recovery, allowing additional blood flow through the affected area. However, few databases and algorithms exist for the specific task of locating them. In this paper, a MATLAB program is developed to find these connections and detect strokes, replacing manual labeling by professionals. The limited data available for this study are 23 2-D microscopy images of mouse cerebral vascular structures highlighted by dyes. In the images, strokes are shown to diminish the pixel count of vessels to below 80% of that of the healthy brain. Stroke classification error is greatly reduced by narrowing the scope from comparing the entire hemisphere to comparing one smaller region. A novel way of finding collateral connections uses connected components, which organize all adjacent pixels into a group: every collateral connection lies on the border of two neighboring arterial flow regions and belongs to the same connected component as the arterial source on each side. Along with finding collateral connections, a newly created coordinate system allows regions to be defined relative to brain landmarks, based on the brain's center, orientation, and scale.
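The connected-components idea described above can be sketched as follows. This is a simplified stand-in for the thesis's MATLAB pipeline, assuming as inputs a binary vessel mask and a per-pixel arterial territory map (values 1 and 2) that are hypothetical here.

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected components in a binary 2-D mask (lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                count += 1
                labels[sy][sx] = count
                q = deque([(sy, sx)])
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count

def spans_both_territories(labels, territory):
    """Flag components whose pixels fall in both arterial territories (1 and 2):
    such a vessel component bridges two arterial flow regions, the signature
    of a collateral connection."""
    seen = {}
    for y, row in enumerate(labels):
        for x, lab in enumerate(row):
            if lab:
                seen.setdefault(lab, set()).add(territory[y][x])
    return [lab for lab, t in seen.items() if {1, 2} <= t]
```

A library routine such as `scipy.ndimage.label` would replace the hand-rolled flood fill in practice; the pure-Python version is shown only to keep the idea explicit.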
The method proposed in this paper combines stroke detection, brain coordinate system extraction, and collateral connection detection in stroke-affected mouse brains using only image processing techniques. This yields a simpler, more explainable result on limited data than techniques such as supervised machine learning, and the method requires neither ground truth nor a large image count for training. The automated process was successfully interpreted by medical experts, which allows for further research into automating collateral connection detection in 3-D.

Item: Dynamic electronic asset allocation comparing genetic algorithm with particle swarm optimization (2018-12)
Authors: Islam, Md Saiful; Christopher, Lauren A.; King, Brian S.; El-Sharkawy, Mohamed

The contribution of this research can be divided into two main tasks: 1) implementing the Electronic Warfare Asset Allocation Problem (EWAAP) with the Genetic Algorithm (GA); and 2) comparing the performance of the Genetic Algorithm with the Particle Swarm Optimization (PSO) algorithm. This work implemented the Genetic Algorithm in C++ and used Qt Data Visualization to display the three-dimensional space, pheromones, and terrain. The Genetic Algorithm implementation maintained and preserved the coding style, data structures, and visualization of the PSO implementation. Although the Genetic Algorithm achieves higher fitness values and better global solutions for 3 or more receivers, it increases the running time. The Genetic Algorithm is around 15-30% more accurate for asset counts from 3 to 6 but requires 26-82% more computational time. When the allocation problem's complexity increases by adding 3-D space, pheromones, and complex terrain, GA is 3.71% more accurate but 121% slower than PSO.
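The GA side of the comparison can be illustrated with a minimal real-coded genetic algorithm on a toy allocation objective. The operators below (truncation selection, one-point crossover, Gaussian mutation, elitism) and the toy fitness are illustrative assumptions; the thesis's EWAAP encoding with terrain and pheromones is far richer.

```python
import random

def genetic_search(fitness, dim, lo, hi, pop_size=30, gens=60,
                   mut_rate=0.15, elite=2):
    """Maximize fitness over [lo, hi]^dim with a simple real-coded GA."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = [ind[:] for ind in pop[:elite]]              # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)   # parents from top half
            cut = random.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]                      # one-point crossover
            child = [min(max(g + random.gauss(0.0, 0.5), lo), hi)
                     if random.random() < mut_rate else g
                     for g in child]                       # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy allocation: place 3 assets so every target has a nearby asset.
random.seed(1)
targets = [2.0, 5.0, 8.0]
coverage = lambda xs: -sum(min((t - a) ** 2 for a in xs) for t in targets)
best = genetic_search(coverage, dim=3, lo=0.0, hi=10.0)
```

The population sort plus crossover loop is where the GA's extra run time relative to PSO's single velocity update per particle typically comes from.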
In summary, the Genetic Algorithm gives a better global solution in some cases, but its computational time is higher than that of Particle Swarm Optimization.

Item: Vehicle-pedestrian interaction using naturalistic driving video through tractography of relative positions and pedestrian pose estimation (2017-04-11)
Authors: Mueid, Rifat M.; Christopher, Lauren A.

Research on robust Pre-Collision Systems (PCS) requires new techniques that allow a better understanding of the dynamic vehicle-pedestrian relationship and that can predict future pedestrian movements. Our research analyzed videos from the Transportation Active Safety Institute (TASI) 110-Car naturalistic driving dataset to extract two dynamic pedestrian semantic features. The dataset consists of videos recorded with forward-facing cameras from 110 cars over a year in all weather and illumination conditions. This research focuses on potential-conflict situations, in which a collision may happen if no avoidance action is taken by the driver or the pedestrian. We used 1000 such 15-second videos to find vehicle-pedestrian relative dynamic trajectories and the pose of pedestrians. Adaptive structural local appearance model and particle filter methods were implemented and modified to track the pedestrians more accurately. We developed a new algorithm to compute the Focus of Expansion (FoE) automatically; the automatically detected FoE height data have a correlation of 0.98 with carefully hand-annotated data. We obtained correct tractography results for over 82% of the videos. For pose estimation, we used a flexible mixture model to capture co-occurrence between pedestrian body segments. Building on an existing single-frame human pose estimation model, we introduced Kalman filtering and temporal movement reduction techniques to make stable stick-figure videos of the pedestrians' dynamic motion. We were able to reduce frame-to-frame pixel offset by 86% compared to the single-frame method.
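The Kalman filtering step for stabilizing the stick-figure joints can be sketched for a single coordinate track. This is a generic constant-velocity filter, a plausible but assumed form; the noise parameters q and r are illustrative, and the thesis's temporal movement reduction is a separate technique not shown here.

```python
def kalman_smooth_1d(zs, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over a 1-D keypoint coordinate track.

    zs: noisy per-frame measurements (e.g. a joint's x pixel coordinate).
    q:  process noise (how much the true motion may change per frame).
    r:  measurement noise (pixel jitter of the per-frame pose estimator).
    """
    x, v = zs[0], 0.0                 # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
    out = []
    for z in zs:
        # Predict: x advances by v; P becomes F P F^T + qI for F = [[1,1],[0,1]].
        x = x + v
        P = [[P[0][0] + P[1][0] + P[0][1] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # Update with measurement z (H = [1, 0]).
        S = P[0][0] + r
        K = (P[0][0] / S, P[1][0] / S)
        y = z - x
        x, v = x + K[0] * y, v + K[1] * y
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x)
    return out
```

Running one such filter per joint coordinate suppresses frame-to-frame jitter from the single-frame pose estimator, which is the effect behind the 86% pixel-offset reduction reported above.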
These tractographs and pose estimation data were used as features to train a neural network for classifying 'potential conflict' and 'no potential conflict' situations. The training of the network achieved a 91.2% true label accuracy and an 8.8% false label rate. Finally, the trained network was used to assess the probability of collision over time for the 15-second videos, generating a spike when there is a 'potential conflict' situation. We also tested our method with TASI mannequin crash data, obtaining a danger spike for 70% of those videos. This research enables new analysis of potential-conflict pedestrian cases with 2-D tractography data and a stick-figure pose representation of pedestrians, providing significant insight into the vehicle-pedestrian dynamics that are critical for safe autonomous driving and transportation safety innovations.