Browsing by Subject "autonomous driving"
Now showing 1 - 3 of 3
Item: Object Detection from a Vehicle Using Deep Learning Network and Future Integration with Multi-Sensor Fusion Algorithm (SAE, 2017-03)
Dheekonda, Raja Sekhar Rao; Panda, Sampad K.; Khan, Nazmuzzaman; Al-Hasan, Mohammad; Anwar, Sohel. Mechanical Engineering, School of Engineering and Technology.
Accuracy in detecting a moving object is critical to autonomous driving and to advanced driver assistance systems (ADAS). By combining object classifications from multiple sensor detections, the model of the object or environment can be identified more accurately. The critical parameters for improving accuracy are the size and the speed of the moving object. All sensor data are used to define a composite object representation, so that class information can be carried in the core object description. This composite data can then be fed to a deep learning network for complete perception fusion, in order to solve the problem of detecting and tracking moving objects. Camera image data from subsequent frames along the time axis, together with the speed and size of the object, will further contribute to developing better recognition algorithms. In this paper, we present preliminary results using only camera images to detect various objects with a deep learning network, as a first step toward developing a multi-sensor fusion algorithm. Simulation experiments based on camera images show encouraging results: the proposed deep-learning-based detection algorithm was able to detect various objects with a certain degree of confidence. A laboratory setup is being commissioned in which three types of sensors (an 8-megapixel digital camera, a LIDAR with 40 m range, and ultrasonic distance transducers) will be used for multi-sensor fusion to identify objects in real time. (A minimal camera-only detection sketch appears after the next item.)

Item: Semantic Segmentation of Road Profiles for Efficient Sensing in Autonomous Driving (IEEE, 2019-06)
Cheng, Guo; Zheng, Jiang Yu; Kilicarslan, Mehmet. Computer and Information Science, School of Science.
In vision-based autonomous driving, the spatial layout of road and traffic must be understood at every moment. This involves detecting the road, vehicles, pedestrians, etc. in images. In driving video, the spatial positions of these patterns are further tracked to capture their motion. This spatial-to-temporal approach inherently demands large computational resources. In this work, however, we take a temporal-to-spatial approach to cope with fast-moving vehicles in autonomous navigation. We sample a one-pixel line at each frame of driving video, and the temporal accumulation of lines from consecutive frames forms a road profile image. The temporal connection of lines also provides layout information about the road and the surrounding environment. This method reduces the data to be processed to a fraction of the video, so that processing can keep up with the vehicle's speed. The key issue is then to identify the different regions in the road profile: it is divided in real time into road, roadside, lane marks, vehicles, etc., as well as motion events such as stopping and turning of the ego-vehicle. We show in this paper that the road profile can be learned through semantic segmentation. We use RGB-F images of the road profile to implement semantic segmentation, capturing both individual regions and their spatial relations on the road effectively. We have tested our method on naturalistic driving video and the results are promising.
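The first item above reports camera-only detection with a deep network but does not name the architecture. As a minimal sketch of that step, the code below runs a pretrained detector (torchvision's Faster R-CNN, a stand-in assumption, not the paper's model) on a single camera frame and prints detections above a confidence threshold; the file name and threshold are illustrative.

```python
# Minimal camera-only detection sketch for the first item above.
# The paper does not name its network; a pretrained torchvision
# Faster R-CNN is used here purely as a stand-in.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A single camera frame, scaled to [0, 1] float as the model expects.
frame = read_image("camera_frame.jpg").float() / 255.0

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep only detections above a confidence threshold (assumed value).
keep = detections["scores"] > 0.5
for box, label, score in zip(detections["boxes"][keep],
                             detections["labels"][keep],
                             detections["scores"][keep]):
    print(f"class {label.item()} at {box.tolist()} (conf {score.item():.2f})")
```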
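The road-profile construction in the second item (one pixel line per frame, accumulated over time) can be sketched directly with OpenCV. The video path and scan-line row below are illustrative assumptions, and the additional F channel of the paper's RGB-F input is not reproduced here.

```python
# Road-profile sketch for the second item above: sample a one-pixel
# line from each frame and stack the lines over time. The video path
# and scan-line row are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("driving.mp4")
scan_row = 400          # image row to sample (assumed)
lines = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    lines.append(frame[scan_row])   # one pixel line, shape (width, 3)

cap.release()

# The temporal accumulation of lines forms the road profile image:
# one row per frame, so its height equals the number of frames sampled.
profile = np.stack(lines)           # shape (num_frames, width, 3)
cv2.imwrite("road_profile.png", profile)
```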
Item: Sequential Semantic Segmentation of Road Profiles for Path and Speed Planning (IEEE, 2022-12)
Cheng, Guo; Zheng, Jiang Yu. Computer and Information Science, School of Science.
Driving video is available from an in-car camera for road detection and collision avoidance. However, the large volume of consecutive video frames contains redundant scene coverage during vehicle motion, which hampers real-time perception in autonomous driving. This work uses compact road profiles (RP) and motion profiles (MP) to identify path regions and dynamic objects, which drastically reduces the video data to a lower dimension and increases the sensing rate. To avoid collisions at close range and to navigate the vehicle at middle and far ranges, several RPs/MPs are scanned continuously at different depths for path planning. We train a deep network to perform semantic segmentation of the RP in the spatial-temporal domain, and we further propose a temporally shifting memory for online testing. It sequentially segments every incoming line without latency by referring to a temporal window. In streaming mode, our method generates real-time output of road, roadsides, vehicles, pedestrians, etc. at discrete depths for path planning and speed control. We have evaluated our method on naturalistic driving videos under various weather and illumination conditions; it reached the highest efficiency with the least amount of data.
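The temporally shifting memory described in the third item can be pictured as a fixed-length window over the most recent scan lines. The sketch below keeps that window in a deque and hands it to a placeholder per-line segmenter; segment_line, the window length, and the callback shape are all hypothetical stand-ins for the paper's trained network.

```python
# Streaming sketch for the third item above: a temporally shifting
# memory holds the last WINDOW scan lines, and each newly arriving
# line is segmented against that window. segment_line() is a
# hypothetical stand-in for the paper's trained network.
from collections import deque
import numpy as np

WINDOW = 32  # temporal window length (assumed)

def segment_line(window):
    """Hypothetical per-line segmenter: takes a (WINDOW, width, 3)
    block and returns a class label per pixel of the newest line."""
    return np.zeros(window.shape[1], dtype=np.int64)  # placeholder

memory = deque(maxlen=WINDOW)  # the temporally shifting memory

def on_new_line(line):
    """Called once per frame with the newly sampled pixel line."""
    memory.append(line)          # oldest line drops out automatically
    if len(memory) < WINDOW:
        return None              # still filling the memory
    window = np.stack(memory)    # (WINDOW, width, 3)
    return segment_line(window)  # labels for the newest line only
```

Because the deque shifts by one line per frame, each incoming line is labeled as soon as it arrives, matching the no-latency, streaming-mode behavior the abstract describes.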