Browsing by Author "Zheng, Jiang Yu"
Now showing 1 - 10 of 26
Item: Adversarial autoencoders for anomalous event detection in images (2017)
Authors: Dimokranitou, Asimenia; Tsechpenakis, Gavriil; Zheng, Jiang Yu; Tuceryan, Mihran
Detection of anomalous events in image sequences is a computer vision problem with various applications, such as public security, health monitoring, and intrusion detection. Despite these applications, anomaly detection remains an ill-defined problem; several definitions exist, and the most commonly used defines an anomaly as a low-probability event. Anomaly detection is challenging mainly because of the lack of abnormal observations in the data, so it is usually treated as an unsupervised learning problem. Our approach is based on autoencoders in combination with Generative Adversarial Networks. The method, called Adversarial Autoencoders [1], is a probabilistic autoencoder that attempts to match the aggregated posterior of the autoencoder's hidden code vector with an arbitrary prior distribution. The adversarial error of the learned autoencoder is low for regular events and high for irregular events. We compare our approach with state-of-the-art methods and report results with respect to accuracy and efficiency.
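As a concrete illustration of the scoring idea in the item above, here is a minimal sketch of flagging irregular frames by combining the autoencoder's reconstruction error with the adversarial (prior-mismatch) error. The module sizes, the added reconstruction term, and the threshold are illustrative assumptions, not the architecture used in the thesis.

```python
# Minimal sketch of anomaly scoring with an adversarial autoencoder (PyTorch).
# Shapes, layers, and the threshold are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, frame_dim = 32, 64 * 64  # assumed latent size and flattened frame size

encoder = nn.Sequential(nn.Linear(frame_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, frame_dim))
# The discriminator judges whether a latent code looks drawn from the prior.
discriminator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                              nn.Linear(64, 1), nn.Sigmoid())

def anomaly_score(frame: torch.Tensor) -> float:
    """Reconstruction error plus adversarial error: after training, both tend
    to be low for regular events and high for irregular ones."""
    z = encoder(frame)
    recon_err = torch.mean((decoder(z) - frame) ** 2)
    adv_err = -torch.log(discriminator(z) + 1e-8).mean()  # prior mismatch
    return (recon_err + adv_err).item()

frame = torch.rand(frame_dim)                 # stand-in for a preprocessed frame
is_anomalous = anomaly_score(frame) > 1.0     # threshold tuned on validation data
```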
Item: All weather road edge identification based on driving video mining (IEEE, 2017)
Authors: Wang, Zheyuan; Cheng, Guo; Zheng, Jiang Yu; Computer and Information Science, School of Science
To keep a vehicle from running off the road, road edge detection is a fundamental function. Current work on road edge detection has not exhaustively tackled all weather and illumination conditions. We first sort the visual appearance of roads based on physical and optical properties under various illuminations. Then, a data mining approach is applied to a large driving video set covering the full spectrum of seasons and weather to learn the statistical distribution of road edge appearances. The obtained color and road-structure parameters of the road environment are used to coarsely classify the weather in a video, and the corresponding algorithm and features are applied for robust road edge detection. To visualize the road appearance and evaluate the accuracy of the detected edges, a compact road profile image is generated, reducing the data to a small fraction of the video. Through this exhaustive examination of weather and illumination, our road detection methods can locate road edges in good weather, reduce errors in dark illumination, and report road invisibility in poor illumination.

Item: Applications of Data Mining in Healthcare (2019-05)
Authors: Peng, Bo; Mohler, George; Dundar, Murat; Zheng, Jiang Yu
With increases in the quantity and quality of healthcare-related data, data mining tools have the potential to improve people's standard of living through personalized and predictive medicine. In this thesis we improve the state of the art in data mining for several problems in the healthcare domain. In problems such as drug-drug interaction prediction and Alzheimer's Disease (AD) biomarker discovery and prioritization, current methods either require tedious feature engineering or have unsatisfactory performance, so new, effective computational tools are needed. In this dissertation, we develop new algorithms for two healthcare problems: high-order drug-drug interaction prediction and amyloid imaging biomarker prioritization in Alzheimer's Disease. Drug-drug interactions (DDIs) and their associated adverse drug reactions (ADRs) represent a significant detriment to public health. Existing research on DDIs primarily focuses on pairwise DDI detection and prediction; effective computational methods for high-order DDI prediction are still needed. I present a deep learning based model, D3I, for cardinality-invariant and order-invariant high-order DDI prediction. The proposed models achieve an F1 of 0.740 and an AUC of 0.847 on high-order DDI prediction, and outperform classical methods on order-2 DDI prediction. These results demonstrate the strong potential of D3I and deep learning based models in tackling the prediction of high-order DDIs and their induced ADRs. The second problem I consider is amyloid imaging biomarker discovery, for which I propose a machine learning paradigm enabling precision medicine in this domain. The paradigm tailors the imaging biomarker discovery process to the individual characteristics of a given patient. I implement it using a newly developed learning-to-rank method, PLTR, which seamlessly integrates two objectives for joint optimization: pushing up relevant biomarkers and ranking among relevant biomarkers. An empirical study of PLTR on the ADNI data yields promising results in identifying and prioritizing individual-specific amyloid imaging biomarkers based on the individual's structural MRI data. The resulting top-ranked imaging biomarkers have the potential to aid personalized diagnosis and disease subtyping.

Item: Big-video mining of road appearances in full spectrums of weather and illuminations (IEEE, 2017-10)
Authors: Cheng, Guo; Wang, Zheyuan; Zheng, Jiang Yu; Computer and Information Science, School of Science
Autonomous and safe driving require keeping the vehicle within the road. Compared to lane mark tracking, road edge detection is more difficult because of the large variation in road and off-road materials and the influence of weather and illumination. This work investigates the visual appearance of roads under a spectrum of weather conditions, applying big-data mining to large-scale naturalistic driving videos taken over a year through four seasons. Large video volumes are condensed to compact road profile images for analysis. Clusters are extracted from all samples with unsupervised learning, and typical views across the spectrum of weather/illumination conditions are generated from the clusters. Further, by varying the number of clusters, we find a stable number for clustering. The learned data are used to coarsely classify driving videos into typical illumination types. The surveyed data can also be used in developing and testing road edge detection algorithms and systems.
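The unsupervised clustering step in the two road-appearance items above can be pictured with a short sketch: color statistics from road profile images are grouped with k-means, and the cluster count is swept to look for a stable number. The feature layout, sample data, and values of k are assumptions for illustration only.

```python
# Illustrative sketch: cluster road-profile color statistics into typical
# weather/illumination types with k-means; sweep k to find a stable count.
import numpy as np
from sklearn.cluster import KMeans

# Assumed features: mean RGB of the road region and of the off-road region
# for each road profile sample (stand-in random data here).
profiles = np.random.rand(1000, 6)

for k in (4, 6, 8, 10):                       # sweep the number of clusters
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles)
    print(k, km.inertia_)                     # look for where inertia stabilizes

model = KMeans(n_clusters=8, n_init=10, random_state=0).fit(profiles)
labels = model.predict(profiles)              # illumination type per sample
```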
Item: Crime Detection from Pre-crime Video Analysis (2024-05)
Authors: Kilic, Sedat; Tuceryan, Mihran; Zheng, Jiang Yu; Tsechpenakis, Gavriil; Durresi, Arjan
This research investigates the detection of pre-crime events, specifically behaviors indicative of shoplifting, through the analysis of CCTV video data. The study introduces an approach that augments individual frames with human pose and emotion information and extracts activity information across subsequent frames to enhance the identification of potential shoplifting actions before they occur. Using a diverse set of models, including 3D Convolutional Neural Networks (CNNs), Graph Neural Networks (GNNs), Recurrent Neural Networks (RNNs), and a specially developed transformer architecture, the research systematically explores the impact of integrating additional contextual information into video analysis. By augmenting frame-level video data with detailed pose and emotion insights, and by focusing on the temporal dynamics between frames, our methodology aims to capture the nuanced behavioral patterns that precede shoplifting events. A comprehensive experimental evaluation across model configurations reveals a significant improvement in pre-crime detection accuracy. The findings underscore the value of combining visual features with augmented data, and of analyzing activity patterns over time, for a deeper understanding of pre-shoplifting behaviors. The study's contributions include a detailed examination of pre-crime frames, strategic augmentation of video data with contextual information, a novel transformer architecture customized for pre-crime analysis, and an extensive evaluation of computational models to improve predictive accuracy.

Item: DeepStep: Direct Detection of Walking Pedestrian From Motion by a Vehicle Camera (IEEE, 2022-06-28)
Authors: Kilicarslan, Mehmet; Zheng, Jiang Yu; Computer and Information Science, School of Science
Pedestrian detection has wide applications in intelligent transportation, and instantaneous understanding of a pedestrian's position and action is essential for autonomous driving. Most algorithms divide these tasks into sequential procedures: pedestrians are detected from shape-based features in video frames, and their behaviors are analyzed by tracking across frames. In contrast, this work introduces a deep learning based pedestrian detection method that uses only motion cues. Pedestrian motion, which differs markedly from that of the static background and of dynamic vehicles, is investigated in the spatial-temporal domain. Pedestrian leg movement forms a chain-type trace in motion profile images even when the ego-vehicle moves. Instead of modeling walking actions kinematically, the chain structure is learned directly from a large pedestrian dataset in driving videos. The method handles the scenes observed from moving vehicles, which are more challenging than those from static cameras. The aim is not only to detect pedestrians promptly but also to predict their walking direction in the driving space. Since the video is reduced to temporal images, real-time performance is achieved with high mean average precision and a low false-positive rate on a publicly available dataset.
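The motion profile representation behind DeepStep can be sketched briefly: one scanline is sampled from every frame and stacked over time, collapsing the video into a temporal image in which walking legs leave chain-like traces. The construction below is a simplified sketch; the file name and scanline row are hypothetical, and the actual system's sampling positions may differ.

```python
# Sketch: condense a driving video into a motion-profile image by stacking
# one horizontal scanline per frame (rows = time, columns = image x).
import cv2
import numpy as np

def motion_profile(video_path: str, row: int = 400) -> np.ndarray:
    """Stack scanline `row` from every frame into a (time x width x 3) image."""
    cap = cv2.VideoCapture(video_path)
    lines = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        lines.append(frame[row, :, :])   # one horizontal slice per frame
    cap.release()
    return np.stack(lines)

profile = motion_profile("drive.mp4")    # hypothetical clip
cv2.imwrite("profile.png", profile)      # temporal image fed to the detector
```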
Item: Detecting Vehicle Interactions in Driving Videos via Motion Profiles (IEEE, 2020-09)
Authors: Wang, Zheyuan; Zheng, Jiang Yu; Gao, Zhen; Electrical and Computer Engineering, School of Engineering and Technology
Identifying interactions of vehicles on the road is important for accident analysis and driving behavior assessment. The interactions considered include those with passing/passed, cut-in, crossing, frontal, oncoming, and parallel-driving vehicles, as well as ego-vehicle actions such as lane changes, stops, turns, and speeding. We use the visual motion recorded in driving video taken by a dashboard camera to identify such interactions. Motion profiles from the videos are filtered at critical positions, which avoids the complexity of object detection, depth sensing, target tracking, and motion estimation. The results are obtained efficiently with acceptable accuracy and can be used in driving video mining, traffic analysis, driver behavior understanding, etc.

Item: Digesting omni-video along routes for navigation (Office of the Vice Chancellor for Research, 2011-04-08)
Authors: Cai, Hongyuan; Zheng, Jiang Yu
Omni-directional video records complete visual information along a route. Although replaying an omni-video reproduces the scene, it requires a significant amount of memory and communication bandwidth. This work extracts distinct views from an omni-video to form a visual digest, named a route sheet, for navigation. We sort scenes at the motion and visibility level and investigate the similarity/redundancy of scenes in the context of a route. Source data from a 3D elevation map or omni-videos are used for the view selection. By condensing the flow in the video, our algorithm can generate distinct omni-view sequences with visual information as rich as the omni-video for further scene indexing and navigation with GIS data.

Item: Direct Vehicle Collision Detection from Motion in Driving Video (IEEE, 2017-06)
Authors: Kilicarslan, Mehmet; Zheng, Jiang Yu; Computer and Information Science, School of Science
The objective of this work is the instantaneous computation of Time-to-Collision (TTC) for potential collisions solely from motion information captured with a vehicle-borne camera. The contribution is the detection of dangerous events, and their degree, directly from motion divergence in the driving video, a cue also used by human drivers, without prior vehicle recognition or depth measurement. Horizontal and vertical motion divergence are analyzed simultaneously in several collision-sensitive zones. Stable motion traces of linear feature components are obtained through filtering in the motion profiles, which avoids object recognition and sophisticated depth sensing. The fine velocity computation yields reasonable TTC accuracy, so the video camera alone can support collision avoidance from the size changes of visual patterns.
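The TTC estimate in the item above rests on the classic scale-divergence relation: for a pattern expanding in the image, TTC ≈ s / (ds/dt), where s is the pattern's image size, so no depth measurement is needed. A minimal sketch with illustrative numbers, not the paper's filtered motion traces:

```python
# Sketch: time-to-collision from the expansion rate of a visual pattern.
def time_to_collision(size_now: float, size_prev: float, dt: float) -> float:
    """Estimate TTC in seconds from the scale change of an image pattern."""
    expansion = (size_now - size_prev) / dt   # ds/dt in pixels per second
    if expansion <= 0:
        return float("inf")                   # not approaching the camera
    return size_now / expansion

# A pattern growing from 100 to 104 pixels over one frame at 30 fps:
print(time_to_collision(104.0, 100.0, dt=1 / 30))  # ~0.87 s to collision
```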
Item: Enabling Real Time Instrumentation Using Reservoir Sampling and Binpacking (2023-05)
Authors: Meruga, Sai Pavan Kumar; Hill, James H.; Durresi, Arjan; Zheng, Jiang Yu
This thesis investigates the overhead added by the reservoir sampling algorithm at different levels of granularity in the real-time instrumentation of distributed software systems. First, the thesis discusses the inconsistencies found in the implementation of the reservoir sampling pintool in paper [1] and provides a corrected implementation. Second, it presents the design and implementation of pintools at different levels of granularity, i.e., the thread, image, and routine levels, along with a quantitative comparison of the performance of different sampling techniques (including reservoir sampling) at each level. The empirical results indicate that enabling real-time instrumentation requires careful scaling and management of resources; to scale the reservoir sampling algorithm on a real-time software system, we integrate the traditional bin packing approach with the instrumentation so as to decrease memory usage and improve performance. The results show that the percentage difference between the overhead added by reservoir and constant sampling is 1.74% at the image level, 0.3% at the routine level, and 0.035% at the thread level. Additionally, using bin packing together with reservoir sampling normalizes the memory usage and runtime of reservoir sampling across multiple threads and different system visibility levels.
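For reference, the core of reservoir sampling (Algorithm R) fits in a few lines: it maintains a uniform random sample of k items from a stream of unknown length in O(k) memory, which is what bounds the instrumentation overhead. The event stream below is a hypothetical stand-in for traced calls.

```python
# Algorithm R: uniform k-sample over a stream of unknown length.
import random

def reservoir_sample(stream, k: int):
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)        # fill the reservoir first
        else:
            j = random.randint(0, i)      # keep item with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

events = (f"call_{i}" for i in range(1_000_000))  # stand-in for traced events
print(reservoir_sample(events, k=10))
```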