Browsing by Author "Wang, Zheyuan"
Now showing 1 - 6 of 6
All weather road edge identification based on driving video mining (IEEE, 2017)
Wang, Zheyuan; Cheng, Guo; Zheng, Jiang Yu; Computer and Information Science, School of Science
To keep vehicles from running off the road, road edge detection is a fundamental function. Current work on road edge detection has not exhaustively tackled all weather and illumination conditions. We first sort the visual appearance of roads by their physical and optical properties under various illuminations. A data-mining approach is then applied to a large driving video set spanning the full spectrum of seasons and weather to learn the statistical distribution of road edge appearances. The learned color parameters of the road environment and road structure are used to roughly classify the weather in a video, and the corresponding algorithm and features are applied for robust road edge detection. To visualize the road appearance and evaluate the accuracy of the detected road, a compact road profile image is generated, reducing the data to a small fraction of the video. Through this exhaustive examination of weather and illumination, our road detection methods locate road edges in good weather, reduce errors in dark illumination, and report road invisibility in poor illumination.

Big-video mining of road appearances in full spectrums of weather and illuminations (IEEE, 2017-10)
Cheng, Guo; Wang, Zheyuan; Zheng, Jiang Yu; Computer and Information Science, School of Science
Autonomous and safe driving require keeping vehicles within roads. Compared to lane-mark tracking, road edge detection is more difficult because of the large variation in road and off-road materials and the influence of weather and illumination. This work investigates the visual appearance of roads under a full spectrum of weather conditions, using big-data mining on large-scale naturalistic driving videos taken over a year through four seasons. Large video volumes are condensed to compact road profile images for analysis. Clusters are extracted from all samples with unsupervised learning, and typical views across the spectrum of weather/illumination are generated from the clusters. Further, by varying the number of clusters we find a stable number for clustering. The learned data are used to roughly classify driving videos into typical illumination types. The surveyed data can also support the development and testing of road edge detection algorithms and systems.

Detecting Vehicle Interactions in Driving Videos via Motion Profiles (IEEE, 2020-09)
Wang, Zheyuan; Zheng, Jiang Yu; Gao, Zhen; Electrical and Computer Engineering, School of Engineering and Technology
Identifying interactions of vehicles on the road is important for accident analysis and driving-behavior assessment. The interactions considered include passing/passed, cut-in, crossing, frontal, oncoming, and parallel-driving vehicles, as well as ego-vehicle actions such as lane changes, stops, turns, and speeding. We use the visual motion recorded in driving video from a dashboard camera to identify such interactions. Motion profiles from the videos are filtered at critical positions, which avoids the complexity of object detection, depth sensing, target tracking, and motion estimation. The results are obtained efficiently with acceptable accuracy and can be used in driving video mining, traffic analysis, driver-behavior understanding, etc.

Planning Autonomous Driving with Compact Road Profiles (IEEE Xplore, 2021-09)
Wang, Zheyuan; Cheng, Guo; Zheng, Jiang Yu; Computer and Information Science, School of Science
Current sensing and control of self-driving vehicles based on full-view recognition struggles to keep a high frequency on a fast-moving vehicle, as increasingly complex computation is employed to cope with variations in the driving environment. This work instead explores a lightweight sensing-planning framework for autonomous driving. Taking advantage of the fact that a vehicle moves along a smooth path, we locate only a few sampling lines in the view to scan the road, vehicles, and environment continuously, which generates a fraction of the full video data. Semantic segmentation is applied to the streaming road profiles without redundant computation. In this paper, we plan the vehicle path/motion based on this minimal data set containing the essential information for driving. Based on the lane, headway length, and vehicle motion detected from the road/motion profiles, the ego-vehicle's path and speed, as well as its interactions with surrounding vehicles, are computed. This sensing-planning scheme, based on spatially sparse yet temporally dense data, ensures a fast response to events and yields smooth driving in busy traffic flow.

Visual Counting of Traffic Flow from a Car via Vehicle Detection and Motion Analysis (Springer, 2020)
Kolcheck, Kevin; Wang, Zheyuan; Xu, Haiyan; Zheng, Jiang Yu; Computer and Information Science, School of Science
Visual traffic counting has so far been carried out by static cameras at streets or by aerial pictures from the sky. This work initiates a new approach: counting traffic flow with widely deployed vehicle driving recorders. Vehicles are counted mainly by a camera moving along a route on the opposite lane. Vehicle detection is first performed on video frames with the deep-learning detector YOLO3, and vehicle trajectories are then counted in a spatial-temporal space called the motion profile. Motion continuity, direction, and missed detections are considered to avoid counting oncoming vehicles multiple times. The method has been tested on naturalistic driving videos lasting for hours. The counted vehicle numbers can be interpolated as the flow of the opposite lanes from a patrol vehicle for traffic control. This mobile counting of traffic is more flexible than traffic monitoring by cameras at street corners.

Visualizing Road Appearance Properties in Driving Video (IEEE, 2016-07)
Wang, Zheyuan; Zheng, Jiang Yu; Kilicarslan, Mehmet; Department of Computer & Information Science, School of Science
With the increasing number of videos taken by driving recorders on thousands of cars, retrieving these videos and searching them for important information is a challenging task. The goal of this work is to mine critical road properties from a large-scale driving video data set for traffic-accident analysis, sensing-algorithm development, and testing benchmarks. Our aim is to condense video data to compact road profiles that contain the visual features of the road environment. By visualizing road edges and lane marks in a reduced-dimension feature space, we further explore road edge models as influenced by road and off-road materials, weather, lighting conditions, etc.
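Several of these abstracts rely on condensing video into a "road profile": one pixel line is sampled from each frame and the lines are stacked over time into a compact image. A minimal sketch of that condensation step, where the function name and the frames-as-nested-lists layout are illustrative assumptions rather than the papers' implementation:

```python
def road_profile(frames, row):
    """Stack one scanline per frame into a temporal road profile.

    frames: iterable of 2-D images (lists of rows of pixel values).
    row: index of the fixed sampling line extracted from each frame.
    Returns one scanline per frame; time runs down the result.
    """
    return [frame[row] for frame in frames]
```

Each frame contributes a single row, so a profile built from N frames is smaller than the raw video by roughly a factor of the frame height, which is what makes year-scale video sets tractable for mining.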
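The "stable number for clustering" idea in the big-video mining abstract, rerunning unsupervised clustering with increasing k and keeping the last k that still yields a substantial drop in within-cluster error, can be sketched in miniature on 1-D features. Everything here (kmeans_1d, stable_k, the quantile initialization, and the 30% improvement threshold) is an illustrative assumption, not the paper's method:

```python
def kmeans_1d(xs, k, iters=50):
    """Tiny deterministic 1-D k-means; returns (centers, inertia)."""
    s = sorted(xs)
    # quantile-midpoint initialization keeps runs reproducible
    centers = [s[(2 * i + 1) * len(s) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda c: abs(x - centers[c]))
            groups[j].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    inertia = sum(min((x - c) ** 2 for c in centers) for x in xs)
    return centers, inertia

def stable_k(xs, k_max=8, drop=0.3):
    """Smallest k after which inertia stops improving by at least `drop`."""
    prev = None
    for k in range(1, k_max + 1):
        _, inertia = kmeans_1d(xs, k)
        if prev is not None and inertia > (1 - drop) * prev:
            return k - 1
        prev = inertia
    return k_max
```

On well-separated brightness samples the inertia drops sharply until k matches the number of real appearance clusters, then plateaus; the plateau point is the "stable" cluster count.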
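The trajectory-counting step of the visual traffic-counting paper (link per-frame detections into trajectories under motion continuity, then keep only tracks moving in the oncoming direction) might look roughly like this; the greedy linker, both thresholds, and the "oncoming means decreasing x" convention are all illustrative assumptions:

```python
def count_oncoming(detections, max_gap=3, max_jump=40):
    """Count oncoming-vehicle trajectories in a motion profile.

    detections: (frame, x) points where a vehicle crosses a fixed
    sampling line; here oncoming vehicles drift toward smaller x.
    """
    tracks = []  # each track is a list of (frame, x)
    for frame, x in sorted(detections):
        best = None
        for tr in tracks:
            lf, lx = tr[-1]
            # motion continuity: small frame gap and bounded x jump
            if 0 < frame - lf <= max_gap and abs(x - lx) <= max_jump:
                if best is None or abs(x - lx) < abs(x - best[-1][1]):
                    best = tr
        if best is not None:
            best.append((frame, x))  # extend the closest live track
        else:
            tracks.append([(frame, x)])  # start a new track
    # direction check: keep tracks whose x decreases overall (oncoming)
    oncoming = [t for t in tracks if len(t) >= 2 and t[-1][1] < t[0][1]]
    return len(oncoming)
```

The gap tolerance stands in for the paper's handling of missed detections: a track survives a few frames without support before a new one is started, which is what prevents one vehicle from being counted twice.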