Browsing by Author "Christopher, Lauren"
Now showing 1 - 10 of 44
Item: A 2D PLUS DEPTH VIDEO CAMERA PROTOTYPE USING DEPTH FROM DEFOCUS IMAGING AND A SINGLE MICROFLUIDIC LENS (2011-08)
Li, Weixu; Christopher, Lauren; Rizkalla, Maher E.; Salama, Paul
A new method for capturing 3D video from a single imager and lens is introduced in this research. The benefit of this method is that it avoids the calibration and alignment issues associated with binocular 3D video cameras and allows for a less expensive overall system. The digital imaging technique Depth from Defocus (DfD) has been used successfully in still-camera imaging to develop a depth map associated with the image. However, DfD has not previously been applied to real-time video, because focus mechanisms are too slow to produce real-time results. This new research shows that a microfluidic lens is capable of the required focal-length changes at twice the video frame rate, due to the electrostatic control of the focus. During processing, two focus settings per output frame are captured using this lens combined with a broadcast video camera prototype. We show that the DfD technique, using Bayesian Markov Random Field optimization, can produce a valid depth map.

Item: 3D EM/MPM MEDICAL IMAGE SEGMENTATION USING AN FPGA EMBEDDED DESIGN IMPLEMENTATION (2011-08)
Liu, Chao; Christopher, Lauren; Rizkalla, Maher E.; Salama, Paul
This thesis presents a Field Programmable Gate Array (FPGA) based embedded system used to achieve high-speed segmentation of 3D images. Segmentation is performed using the Expectation-Maximization with Maximization of Posterior Marginals (EM/MPM) Bayesian algorithm. In this system, the embedded processor controls a custom circuit which performs the MPM and portions of the EM algorithm. The embedded processor completes the EM algorithm and also controls image data transmission between the host computer and on-board memory.
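The EM/MPM approach described above combines a per-class Gaussian data term with a Markov Random Field neighborhood prior. A minimal 2D sketch of one MPM-style Gibbs sweep follows; the function name, 4-neighbor Potts prior, and parameter values are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def mpm_sweep(y, labels, means, sigma, beta, rng):
    """One Gibbs-style sweep of an MPM-like update on a 2D image.

    y: observed image; labels: current segmentation;
    means/sigma: per-class Gaussian parameters; beta: MRF smoothing weight.
    """
    h, w = y.shape
    k = len(means)
    for i in range(h):
        for j in range(w):
            log_p = np.empty(k)
            for c in range(k):
                # Gaussian data term for class c
                data = -((y[i, j] - means[c]) ** 2) / (2.0 * sigma ** 2)
                # 4-neighbor Potts prior: penalize disagreeing neighbors
                disagree = sum(
                    labels[ni, nj] != c
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= ni < h and 0 <= nj < w
                )
                log_p[c] = data - beta * disagree
            # normalize and sample the pixel's label from its posterior
            p = np.exp(log_p - log_p.max())
            p /= p.sum()
            labels[i, j] = rng.choice(k, p=p)
    return labels
```

Repeating such sweeps and voting per pixel approximates the posterior marginals that MPM maximizes.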
The whole system has been implemented on a Xilinx Virtex 6 FPGA and achieved over 100 times improvement compared to standard desktop computing hardware.

Item: 3D ENDOSCOPY VIDEO GENERATED USING DEPTH INFERENCE: CONVERTING 2D TO 3D (2013-08-20)
Rao, Swetcha; Christopher, Lauren; Rizkalla, Maher E.; Salama, Paul; King, Brian
A novel algorithm was developed to convert raw 2-dimensional endoscope videos into a 3-dimensional view. Minimally invasive surgeries aided with a 3D view of the in-vivo site have been shown to reduce errors and improve training time compared to those with a 2D view. The novelty of this algorithm is that two cues in the images are used to develop the 3D view. Illumination is the first cue, used to find the darkest regions in the endoscopy images in order to locate the vanishing point(s). The second cue is the presence of ridge-like structures in the in-vivo images of the endoscopy image sequence. Edge detection is used to map these ridge-like structures into concentric ellipses with their common center at the darkest spot. These two observations are then used to infer the depth of the endoscopy videos, which in turn serves to convert them from 2D to 3D. The processing time is between 21 seconds and 20 minutes per frame on a 2.27 GHz CPU, depending on the number of edge pixels present in the edge-detection image. The accuracy of ellipse detection was measured to be 98.98% to 99.99%. The algorithm was tested on 3 truth images with known ellipse parameters and also on real bronchoscopy image sequences from two surgical procedures. Of the 1020 frames tested in total, 688 frames had a single vanishing point while 332 frames had two vanishing points.
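The illumination cue described above locates the darkest region of a frame as the vanishing-point candidate. A minimal sketch of that step, using a box-filtered minimum; the function name, window size, and integral-image approach are illustrative assumptions rather than the thesis implementation:

```python
import numpy as np

def darkest_region_center(frame, win=15):
    """Locate the darkest region of a grayscale frame as a
    vanishing-point candidate (illumination cue).

    A box mean over a win x win window smooths out pixel noise; the
    minimum of the smoothed image approximates the region center.
    """
    h, w = frame.shape
    # integral image (one zero row/column padded) for fast box sums
    ii = np.pad(frame, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    r = win // 2
    best, best_ij = np.inf, (0, 0)
    for i in range(r, h - r):
        for j in range(r, w - r):
            # sum of frame[i-r:i+r+1, j-r:j+r+1] via four corner lookups
            s = (ii[i + r + 1, j + r + 1] - ii[i - r, j + r + 1]
                 - ii[i + r + 1, j - r] + ii[i - r, j - r])
            if s < best:
                best, best_ij = s, (i, j)
    return best_ij
```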
Our algorithm detected the single vanishing point in 653 of the 688 frames and two vanishing points in 322 of the 332 frames.

Item: 3D Image Segmentation Implementation on FPGA Using EM/MPM Algorithm (2010-12)
Sun, Yan; Christopher, Lauren; Rizkalla, Maher E.; Salama, Paul
In this thesis, 3D image segmentation is targeted to a Xilinx Field Programmable Gate Array (FPGA) and verified with extensive simulation. Segmentation is performed using the Expectation-Maximization with Maximization of the Posterior Marginals (EM/MPM) Bayesian algorithm. This algorithm segments the 3D image using neighboring pixels based on a Markov Random Field (MRF) model. This iterative algorithm is designed, synthesized, and simulated for the Xilinx FPGA, and greater than 100 times speed improvement over standard desktop computer hardware is achieved. Three new techniques were the key to achieving this speed: pipelined computational cores, sixteen parallel data paths, and a novel memory interface for maximizing the external memory bandwidth. Seven MPM segmentation iterations are matched to the external memory bandwidth required for a single source-file read and a single segmented-file write, plus a small amount of latency.

Item: 3D Object Detection Using Virtual Environment Assisted Deep Network Training (2020-12)
Dale, Ashley S.; Christopher, Lauren; King, Brian; Salama, Paul
An RGBZ synthetic dataset consisting of five object classes in a variety of virtual environments and orientations was combined with a small sample of real-world image data and used to train the Mask R-CNN (MR-CNN) architecture in a variety of configurations.
When the MR-CNN architecture was initialized with MS COCO weights and the heads were trained with a mix of synthetic and real-world data, F1 scores improved in four of the five classes: the average maximum F1-score across all classes and all epochs for the networks trained with synthetic data is F1* = 0.91, compared to F1 = 0.89 for the networks trained exclusively with real data, and the standard deviation of the maximum mean F1-score for synthetically trained networks is σ* = 0.015, compared to σ = 0.020 for the networks trained exclusively with real data. Various backgrounds in synthetic data were shown to have negligible impact on F1 scores, opening the door to abstract backgrounds and minimizing the need for intensive synthetic data fabrication. When the MR-CNN architecture was initialized with MS COCO weights and depth data was included in the training data, the network was shown to rely heavily on the initial convolutional input to feed features into the network; the image depth channel was shown to influence mask generation, and the image color channels were shown to influence object classification. A set of latent variables for a subset of the synthetic dataset was generated with a Variational Autoencoder and then analyzed using Principal Component Analysis and Uniform Manifold Approximation and Projection (UMAP). The UMAP analysis showed no meaningful distinction between real-world and synthetic data, and a small bias towards clustering based on image background.

Item: 3d terrain visualization and CPU parallelization of particle swarm optimization (2018)
Wieczorek, Calvin L.; Christopher, Lauren; King, Brian; Lee, John
Particle Swarm Optimization is a bio-inspired optimization technique used to approximately solve the non-deterministic polynomial (NP) problem of asset allocation in 3D space, frequency, antenna azimuth [1], and elevation orientation [1].
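As background for the PSO work introduced above, a minimal global-best PSO sketch is shown below; the inertia and acceleration constants, search bounds, and function name are illustrative assumptions, not the settings used in the thesis:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=100, seed=0):
    """Minimal global-best PSO: velocity/position updates with
    inertia (w) and cognitive/social pulls (c1, c2)."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # pull each particle toward its personal best and the global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```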
This research uses Qt Data Visualization to display the PSO solutions, assets, and transmitters in 3D space, building on the work done in [2]. Elevation and imagery data were extracted from ARCGIS (a geographic information system (GIS) database) so that the 3D visualization displays proper topological data. The 3D environment range was improved and is now dynamic, giving the user appropriate coordinates based on the ARCGIS latitude and longitude ranges. The second part of the research improves the PSO's runtime performance, using OpenMP with CPU threading to parallelize the evaluation of the PSO by particle. This implementation uses CPU multithreading with 4 threads to improve the performance of the PSO by 42%-51% compared to running the PSO without CPU multithreading. The contributions provided allow the PSO project to more realistically simulate its use in the Electronic Warfare (EW) space, with the CPU multithreading implementation offering further performance improvements.

Item: An Adaptive Eye Gaze Tracking System Without Calibration for Use in an Automobile (2011)
Rajabather, Harikrishna K.; Koskie, Sarah; Chen, Yaobin; Christopher, Lauren
One of the biggest hurdles to the development of an effective driver state monitor is that there is no real-time eye-gaze detection, primarily because such systems require calibration. In this thesis the various aspects that comprise an eye gaze tracker are investigated, and from these we developed an eye gaze tracker for automobiles that does not require calibration. We used a monocular camera system with IR light sources placed in each of the three mirrors. The camera system created the bright-pupil effect for robust pupil detection and tracking. We developed an SVM-based algorithm for initial eye candidate detection; after that, the eyes were tracked using a hybrid Kalman/mean-shift algorithm.
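The per-particle OpenMP parallelization described in the PSO work above distributes fitness evaluation across worker threads. A hedged Python analogue of that idea (function name and worker count are illustrative; the thesis uses OpenMP in C++):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def evaluate_swarm(fitness, positions, n_workers=4):
    """Evaluate each particle's fitness on a worker pool, mirroring
    per-particle OpenMP parallelization.

    Note: CPython threads only speed this up when `fitness` releases
    the GIL (NumPy-heavy or native code); otherwise a process pool is
    the closer analogue. `pool.map` preserves input order.
    """
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return np.array(list(pool.map(fitness, positions)))
```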
From the tracked pupils, various features such as the locations of the glints (reflections in the pupil from the IR light sources) were extracted. This information is then fed into a Generalized Regression Neural Network (GRNN), which maps it into one of thirteen gaze regions in the vehicle.

Item: Analysis of Latent Space Representations for Object Detection (2024-08)
Dale, Ashley Susan; Christopher, Lauren; King, Brian; Salama, Paul; Rizkalla, Maher
Deep Neural Networks (DNNs) successfully perform object detection tasks, and the Convolutional Neural Network (CNN) backbone is a commonly used feature extractor before secondary tasks such as detection, classification, or segmentation. In a DNN model, the relationship between the features learned by the model from the training data and the features leveraged by the model during test and deployment has motivated the area of feature-interpretability studies. The work presented here applies equally to white-box and black-box models and to any DNN architecture. The metrics developed do not require any information beyond the feature vector generated by the feature-extraction backbone. These methods are therefore the first capable of estimating black-box model robustness in terms of latent space complexity and the first capable of examining feature representations in the latent space of black-box models. This work contributes the following four novel methodologies and results. First, a method for quantifying the invariance and/or equivariance of a model using the training data shows that the representation of a feature in the model impacts model performance. Second, a method for quantifying an observed domain gap in a dataset using the latent feature vectors of an object detection model is paired with pixel-level augmentation techniques to close the gap between real and synthetic data. This results in an improvement in the model's F1 score on a test set of outliers from 0.5 to 0.9.
Third, a method for visualizing and quantifying similarities between the latent manifolds of two black-box models is used to correlate similar feature representations with increased success in the transferability of gradient-based attacks. Finally, a method for examining the global complexity of decision boundaries in black-box models is presented, where more complex decision boundaries are shown to correlate with increased model robustness to gradient-based and random attacks.

Item: Asset Allocation with Swarm/Human Blended Intelligence (IEEE, 2016-10)
Christopher, Lauren; Boler, William; Wieczorek, Calvin; Crespo, Jonah; Witcher, Paul; Hawkins, Scot A.; Stewart, James
Department of Electrical and Computer Engineering, School of Engineering and Technology
PSO has been used to demonstrate the near-real-time optimization of frequency allocations and spatial positions for receiver assets in highly complex Electronic Warfare (EW) environments. The PSO algorithm computes optimal or near-optimal solutions so rapidly that multiple assets can be exploited in real time and re-optimized on the fly as the situation changes. The allocation of assets in 3D space requires a blend of human intelligence and computational optimization. This paper advances the research on the tough problem of how humans interface with the swarm to direct the solution. The human places new pheromone-inspired spheres of influence to direct the final solution; the swarm can then react to this new input. Our results indicate that this method can maintain the speed goal of less than 1 second, even with multiple spheres of pheromone influence in the solution space.

Item: Attached Learning Model for First Digital System Design Course in ECE Program (American Society for Engineering Education, 2016-06)
Shayesteh, Seemein; Rizkalla, Maher E.; Christopher, Lauren; Miled, Zina Ben
Department of Electrical and Computer Engineering, School of Engineering and Technology
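The latent-space analyses in the object-detection work above use Principal Component Analysis among their tools. A minimal sketch of projecting latent feature vectors onto their top principal components via SVD; the function and variable names are illustrative, not the dissertation's code:

```python
import numpy as np

def pca_project(latents, k=2):
    """Project latent feature vectors onto their top-k principal
    components, e.g. as a first look at real-vs-synthetic structure.

    latents: (n_samples, n_features) array of feature vectors.
    """
    # center the data, then take principal directions from the SVD;
    # rows of vt are ordered by decreasing singular value
    x = latents - latents.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T
```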