Browsing by Author "Christopher, Lauren Ann"
Now showing 1 - 4 of 4
Item: Asset allocation in frequency and in 3 spatial dimensions for electronic warfare application (2016-04)
Crespo, Jonah Greenfield; Christopher, Lauren Ann; Dos Santos, Euzeli Cipriano, Jr.; Rizkalla, Maher; Li, Lingxi; King, Brian

This paper describes two research areas applied to Particle Swarm Optimization (PSO) in an electronic warfare asset scenario. First, a three-spatial-dimension solution utilizing topographical data is implemented and tested against a two-dimensional solution. A three-dimensional (3D) optimization increases the solution space for optimizing asset location. Topography from NASA's Digital Elevation Model is also added to the solution to provide a realistic scenario. The optimization is tested for run time, average distance between receivers, average distance between receivers and paired transmitters, and transmission power. Due to map load times and increased iterations, the average run time increased from 123 ms to 178 ms, which remains below the 1-second target for convergence speed. The spread distance between receivers increased from 86 km to 89 km. The distance between each receiver and its paired transmitters, as well as the total received power, did not change significantly. In the second research contribution, a user input is created and placed into an unconstrained 2D active swarm. This "human in the swarm" scenario allows a user to change keep-away boundaries during optimization. The blended human-and-swarm solution successfully incorporated human input into a running optimization with a time delay. The results of this research show that electronic warfare solutions with real 3D topography can be simulated with minimal computational cost over two-dimensional solutions and that electronic warfare solutions can successfully optimize using human input data.
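The abstract above does not include code; as a rough illustration of the kind of particle swarm optimization it describes, the following minimal sketch places receivers in 3D while penalizing a keep-away region. All function and parameter names (e.g., `fitness`, `keep_away`, the bounds and iteration budget) are hypothetical assumptions for illustration, not the thesis implementation.

```python
# Minimal PSO sketch (hypothetical, not the thesis implementation):
# optimize 3D receiver positions against an illustrative fitness that
# rewards spread between receivers and penalizes a keep-away zone.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_receivers, dim = 30, 4, 3          # swarm size, assets, x/y/z
bounds = np.array([[0, 100], [0, 100], [0, 5]])   # km search box (assumed)

def fitness(pos):
    """pos: flattened (n_receivers, 3). Larger spread is better; keep-away is penalized."""
    p = pos.reshape(n_receivers, dim)
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    spread = d[np.triu_indices(n_receivers, 1)].mean()
    keep_away = np.array([50.0, 50.0, 0.0])       # user-defined boundary center (assumed)
    penalty = np.sum(np.linalg.norm(p - keep_away, axis=1) < 10) * 100.0
    return spread - penalty

x = rng.uniform(bounds[:, 0], bounds[:, 1], (n_particles, n_receivers, dim)).reshape(n_particles, -1)
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(200):                              # iteration budget (assumed)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([fitness(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best receiver layout (km):", gbest.reshape(n_receivers, dim).round(1))
```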
Item: Attention Mechanism Improves YOLOv5x for Detecting Vehicles on Surveillance Videos (IEEE, 2022-10)
Qui, Mei; Christopher, Lauren Ann; Chein, Stanley; Chen, Yaobin; Electrical and Computer Engineering, School of Engineering and Technology

Vehicle detection accuracy on surveillance videos is heavily restricted by camera angles, low lighting conditions, low visibility caused by harsh weather, and serious occlusions. For full 24/7 operation, Intelligent Transportation Services (ITS) are expected to perform well on all categories of target detections in the environment. Unfortunately, most existing datasets do not cover all of these difficult conditions. Moreover, state-of-the-art deep learning detector performance decreases under these conditions. This paper reports on the training of an object detection system using a range of traffic scenarios: sunny, rainy, snowy, one-side road, two-side road, complex road structures with occlusions, heavy traffic with congestion, light traffic, and reduced traffic at night. The state-of-the-art YOLOv5x object detector is used for vehicle detection and is fine-tuned on this new diverse dataset through transfer learning. Transfer learning freezes the backbone network while training the remaining fully connected network. To further improve detection performance, we added two convolutional block attention modules (CBAM) to the neck as our proposed system: 2xCBAM-YOLOv5. Several experiments refined the number of CBAMs and the placement of these modules to optimize performance. With transfer learning alone, the mean Average Precision (mAP) on the test data improves from 75.9% to 78.9%. After transfer learning, ablations were done on YOLOv5x combined with the new CBAMs. The resulting mAP reaches 85.0%, while precision improves from 82.3% to 88.2%, recall improves from 72.3% to 80.4%, and the F1-score improves from 0.77 to 0.841 compared with transfer learning alone. This new architecture provides an overall improvement for ITS traffic surveillance applications.

Item: Parallelized Ray Casting Volume Rendering and 3D Segmentation with Combinatorial Map (2016-04-27)
Huang, Wenhan; Salama, Paul; Rizkalla, Maher; Christopher, Lauren Ann; Dunn, Kenneth W.; King, Brian

Rapid development of digital technology has enabled real-time volume rendering of scientific data, in particular large microscopy datasets. In general, volume rendering techniques project 3D discrete datasets onto 2D image planes, with the generated views being transparent and having designated colors that are not necessarily the "real" colors. Volume rendering techniques first require designating a processing method that assigns different colors and transparency coefficients to different regions. Then, based on the "viewer" and the dataset "location," the method determines the final imaging effect. Current popular techniques include ray casting, splatting, shear warp, and texture-based volume rendering. Of particular interest is ray casting, as it permits the display of objects interior to a dataset as well as the rendering of complex objects such as skeleton and muscle. However, ray casting requires a large amount of memory and suffers from long processing times. One way to address this is to parallelize its implementation on programmable graphics processing hardware. This thesis proposes a GPU-based ray casting algorithm that can render a 3D volume in real-time applications. In addition to implementing volume rendering techniques on programmable graphics processing hardware to decrease execution times, 3D image segmentation techniques can also be utilized to increase execution speeds. In 3D image segmentation, the dataset is partitioned into smaller regions based on specific properties. By using a 3D segmentation method in volume rendering applications, users can extract individual objects from within the 3D dataset for rendering and further analysis. This thesis proposes a 3D segmentation algorithm with a combinatorial map that can be parallelized on graphics processing units.
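The thesis above implements ray casting on the GPU; as a simplified, CPU-only illustration of the front-to-back compositing that ray casting performs, the sketch below marches axis-aligned rays through a synthetic volume with a hypothetical grayscale transfer function. It is an assumption-laden stand-in, not the author's GPU implementation.

```python
# Simplified CPU ray casting sketch (illustrative only, not the GPU thesis code):
# orthographic rays along the z-axis with front-to-back alpha compositing.
import numpy as np

# Synthetic 64^3 volume: a bright sphere embedded in a dark background.
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
volume = (np.sqrt((x - n/2)**2 + (y - n/2)**2 + (z - n/2)**2) < n/4).astype(float)

def transfer(sample):
    """Hypothetical transfer function: map a density sample to (color, opacity)."""
    return sample, sample * 0.05                  # grayscale color, small per-step alpha

def ray_cast(vol):
    h, w = vol.shape[1], vol.shape[2]
    image = np.zeros((h, w))
    alpha_acc = np.zeros((h, w))
    for k in range(vol.shape[0]):                 # march front to back along z
        color, alpha = transfer(vol[k])
        # Front-to-back compositing: C += (1 - A) * a * c ;  A += (1 - A) * a
        image += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
        if np.all(alpha_acc > 0.99):              # early ray termination
            break
    return image

img = ray_cast(volume)
print("rendered image:", img.shape, "max intensity:", round(float(img.max()), 3))
```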
Item: Silent speech recognition in EEG-based brain computer interface (2015)
Ghane, Parisa; Li, Lingxi; Tovar, Andres; Christopher, Lauren Ann; King, Brian

A Brain Computer Interface (BCI) is a hardware and software system that establishes direct communication between the human brain and the environment. In a BCI system, brain messages pass through wires and external computers instead of the normal pathway of nerves and muscles. The general workflow in all BCIs is to measure brain activities, process them, and then convert them into an output readable by a computer. The measurement of electrical activities in different parts of the brain is called electroencephalography (EEG). Many sensor technologies, with different numbers of electrodes, exist to record brain activities along the scalp. Each electrode captures a weighted sum of the activities of all neurons in the area around that electrode. To establish a BCI system, a set of electrodes is placed on the scalp, and a tool sends the signals to a computer for training a system that can find the important information, extract it from the raw signal, and use it to recognize the user's intention. Finally, a control signal is generated based on the application. This thesis describes the step-by-step training and testing of a BCI system that can be used by a person who has lost speaking skills through an accident or surgery but still has healthy brain tissue. The goal is to establish an algorithm that recognizes different vowels from EEG signals. It uses a bandpass filter to remove noise and artifacts from the signals, the periodogram for feature extraction, and a Support Vector Machine (SVM) for classification.
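As a rough sketch of the processing chain this abstract names (bandpass filtering, periodogram features, SVM classification), the following uses SciPy and scikit-learn on synthetic data. The sampling rate, band edges, epoch length, and class labels are illustrative assumptions, not the thesis settings.

```python
# Illustrative sketch of the described pipeline (assumed parameters, synthetic data):
# bandpass filter -> periodogram features -> SVM classifier.
import numpy as np
from scipy.signal import butter, filtfilt, periodogram
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

fs = 256                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)

def bandpass(sig, low=1.0, high=40.0, order=4):
    """Zero-phase Butterworth bandpass to suppress drift and high-frequency noise."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def features(sig):
    """Periodogram power spectrum of a single-channel epoch as the feature vector."""
    _, pxx = periodogram(sig, fs=fs)
    return pxx

# Synthetic stand-in for labeled single-channel EEG epochs (2 s each) for two imagined vowels.
epochs = rng.standard_normal((200, 2 * fs))
labels = rng.integers(0, 2, size=200)      # e.g., vowel /a/ vs. vowel /u/ (hypothetical classes)

X = np.array([features(bandpass(e)) for e in epochs])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy on synthetic data:", clf.score(X_test, y_test))
```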