Browsing by Author "Kilicarslan, Mehmet"
Now showing 1-5 of 5
Item DeepStep: Direct Detection of Walking Pedestrian From Motion by a Vehicle Camera (IEEE, 2022-06-28)
Kilicarslan, Mehmet; Zheng, Jiang Yu; Computer and Information Science, School of Science
Pedestrian detection has wide applications in intelligent transportation. For autonomous driving, it is essential to understand a pedestrian's position and action instantaneously. Most algorithms divide these tasks into sequential procedures: pedestrians are detected from shape-based features in video frames, and their behaviors are then analyzed by tracking across frames. In contrast, this work introduces a deep-learning-based pedestrian detection method that uses motion cues only. Pedestrian motion, which differs markedly from that of the static background and of other vehicles, is investigated in the spatial-temporal domain. The leg movement of a walking pedestrian forms a chain-type trace in the motion profile images even when the ego-vehicle is moving. Instead of modeling walking actions with kinematics, the chain structure is learned directly from a large pedestrian dataset of driving videos. The method handles the scenes observed from a moving vehicle, which are more challenging than those from static cameras. The aim is not only to detect pedestrians promptly but also to predict their walking direction in the driving space. Since the video is reduced to temporal images, real-time performance is achieved, with a high mean average precision and a low false-positive rate on a publicly available dataset.

Item Direct Vehicle Collision Detection from Motion in Driving Video (IEEE, 2017-06)
Kilicarslan, Mehmet; Zheng, Jiang Yu; Computer and Information Science, School of Science
The objective of this work is the instantaneous computation of time-to-collision (TTC) for potential collisions, using only the motion information captured with a vehicle-borne camera. The contribution is the detection of dangerous events and their degree directly from motion divergence in the driving video, a cue also used by human drivers, without prior vehicle recognition or depth measurement. Horizontal and vertical motion divergence are analyzed simultaneously in several collision-sensitive zones. Stable motion traces of linear feature components are obtained through filtering in the motion profiles. As a result, the method avoids object recognition and sophisticated depth sensing. The fine velocity computation yields reasonable TTC accuracy, so that the video camera can achieve collision avoidance from the size changes of visual patterns alone.

Item Predict Vehicle Collision by TTC From Motion Using a Single Video Camera (IEEE, 2018-05)
Kilicarslan, Mehmet; Zheng, Jiang Yu; Computer and Information Science, School of Science
The objective of this paper is the instantaneous computation of time-to-collision (TTC) for potential collisions, using only the motion information captured with a vehicle-borne camera. The contribution is the detection of dangerous events and their degree directly from motion divergence in the driving video, a cue also used by human drivers. Horizontal and vertical motion divergence are analyzed simultaneously in several collision-sensitive zones. The video data are condensed to motion profiles, taken both horizontally and vertically in the lower half of the video, which show motion trajectories directly as edge traces. Stable motion traces of linear feature components are obtained through filtering in the motion profiles. As a result, the method avoids prior object recognition and sophisticated depth sensing. The fine velocity computation yields reasonable TTC accuracy, so that a video camera alone can achieve collision avoidance from the size changes of visual patterns. We have tested the algorithm on various roads, environments, and traffic, and show the results by visualization in the motion profiles for overall evaluation.
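The two TTC papers above rest on a standard scaling relation: for a pattern closing in at a roughly constant relative speed, TTC ≈ s / (ds/dt), where s is the pattern's image size. The papers derive the size change from motion divergence in motion profiles; the minimal Python sketch below instead assumes per-frame width measurements of a tracked pattern are already available (the function and variable names are illustrative, not from the papers):

```python
import numpy as np

def ttc_from_size(sizes, fps):
    """Estimate time-to-collision from per-frame image sizes of a pattern.

    Scaling relation: for an object approaching at a constant relative
    speed, TTC ~= s / (ds/dt), where s is the image size in pixels.
    """
    sizes = np.asarray(sizes, dtype=float)
    ds_dt = np.gradient(sizes) * fps  # size change in pixels per second
    with np.errstate(divide="ignore", invalid="ignore"):
        # A non-growing pattern is not closing in: report infinite TTC.
        return np.where(ds_dt > 0, sizes / ds_dt, np.inf)

# Hypothetical example: a pattern widening from 40 to 58 px over 10 frames at 30 fps
widths = [40, 42, 44, 46, 48, 50, 52, 54, 56, 58]
print(ttc_from_size(widths, fps=30))  # roughly 0.67 s to 0.97 s: imminent
```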
Item Semantic Segmentation of Road Profiles for Efficient Sensing in Autonomous Driving (IEEE, 2019-06)
Cheng, Guo; Zheng, Jiang Yu; Kilicarslan, Mehmet; Computer and Information Science, School of Science
In vision-based autonomous driving, the spatial layout of the road and traffic must be understood at every moment. This involves detecting the road, vehicles, pedestrians, etc. in images. In driving video, the spatial positions of the various patterns are further tracked for their motion. This spatial-to-temporal approach inherently demands large computational resources. In this work, however, we take a temporal-to-spatial approach to cope with fast-moving vehicles in autonomous navigation. We sample a one-pixel line from each frame of the driving video, and the temporal congregation of lines from consecutive frames forms a road profile image. The temporal connection of lines also provides layout information about the road and the surrounding environment. This method reduces the data to be processed to a fraction of the video, so that processing can keep up with the vehicle's speed. The key issue is then to identify the different regions in the road profile: it is divided in real time into road, roadside, lane marks, vehicles, etc., as well as motion events such as stopping and turning of the ego-vehicle. We show in this paper that the road profile can be learned through semantic segmentation. We apply semantic segmentation to RGB-F images of the road profile to grasp both the individual regions and their spatial relations on the road effectively. We have tested our method on naturalistic driving video, and the results are promising.

Item Visualizing Road Appearance Properties in Driving Video (IEEE, 2016-07)
Wang, Zheyuan; Zheng, Jiang Yu; Kilicarslan, Mehmet; Department of Computer & Information Science, School of Science
With the growing number of videos taken by driving recorders on thousands of cars, retrieving these videos and searching them for important information is a challenging task. The goal of this work is to mine critical road properties from a large-scale driving video dataset for traffic accident analysis, sensing-algorithm development, and benchmark testing. Our aim is to condense the video data into compact road profiles that contain the visual features of the road environment. By visualizing road edges and lane marks in a reduced-dimension feature space, we further explore road edge models as influenced by road and off-road materials, weather, lighting conditions, etc.
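The road-profile construction shared by the last two items, one pixel line sampled per frame and stacked over time, can be illustrated with a short sketch. This is a minimal version assuming OpenCV frame capture; the sampled row here is a fixed placeholder in the lower half of the frame, whereas the papers place the sampling lines according to the road layout:

```python
import cv2
import numpy as np

def build_road_profile(video_path, row=None):
    """Stack one pixel line from each frame into a road profile image.

    The x-axis of the result is image position and the y-axis is time;
    each consecutive frame contributes a single scan line.
    """
    cap = cv2.VideoCapture(video_path)
    lines = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Placeholder sampling position: a fixed row in the lower half.
        r = 3 * frame.shape[0] // 4 if row is None else row
        lines.append(frame[r].copy())  # one 1-pixel line per frame
    cap.release()
    return np.stack(lines)  # shape: (num_frames, frame_width, 3)

profile = build_road_profile("drive.mp4")  # placeholder file name
cv2.imwrite("road_profile.png", profile)
```

Because each frame contributes only one line, the data volume is a small fraction of the full video, which is what lets the segmentation and visualization steps run in real time.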