Browsing by Author "Zheng, J. Y."
Now showing 1 - 2 of 2
Item: Attribute-aware Semantic Segmentation of Road Scenes for Understanding Pedestrian Orientations (IEEE, 2018-11)
Sulistiyo, M. D.; Kawanishi, Y.; Deguchi, D.; Hirayama, T.; Ide, I.; Zheng, J. Y.; Murase, H.; Computer and Information Science, School of Science

Semantic segmentation is a task of great interest to many deep learning researchers working on scene understanding. However, recognizing details about objects' attributes can be more informative, and also helpful for better scene understanding, in intelligent-vehicle use cases. This paper introduces a method for simultaneous semantic segmentation and pedestrian attribute recognition. A modified dataset is built on top of the Cityscapes dataset by adding attribute classes corresponding to pedestrian orientations. The proposed method extends the SegNet model and is trained using both the original and the attribute-enriched datasets. Experimental results show that the proposed attribute-aware semantic segmentation approach slightly improves performance on the Cityscapes dataset while expanding its set of classes through additional training data.

Item: Predicting Hazardous Driving Events Using Multi-Modal Deep Learning Based on Video Motion Profile and Kinematics Data (IEEE, 2018-11)
Gao, Z.; Liu, Y.; Zheng, J. Y.; Yu, R.; Wang, X.; Sun, P.; Computer and Information Science, School of Science

As traffic accidents caused by commercial vehicle drivers rise, more regulations have been issued to improve their safety status. Driving record instruments are required to be installed on such vehicles in China. The naturalistic driving data obtained offer insight into the causal factors of hazardous events, provided that the events can be located within large volumes of data.
In this study, we develop a model based on a low-definition driving record instrument and vehicle kinematic data for post-accident analysis using a multi-modal deep learning method. Because the camera on a commercial vehicle is mounted higher than on a car and can observe a greater distance, motion profiles are extracted from the driving video to capture the trajectory features of vehicles ahead at different depths. A random forest is then used to select the significant kinematic variables that reflect potential crashes. Finally, a multi-modal deep convolutional neural network (DCNN) combining both video and kinematic data is developed to identify potential collision risk in each 12-second vehicle trip. The results indicate that the proposed multi-modal deep learning model identifies hazardous events within large volumes of data at an AUC of 0.81, outperforming the state-of-the-art random forest model and a kinematic threshold method.
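The fusion step in the abstract above — combining a video-derived motion-profile embedding with selected kinematic variables before a single risk output — can be sketched as a minimal late-fusion network in NumPy. This is an illustration only: the layer sizes, parameter names, and random initialization here are assumptions for the sketch, not the architecture reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_relu(x, w, b):
    # Fully connected layer followed by a ReLU activation
    return np.maximum(x @ w + b, 0.0)

def fuse_and_score(video_feat, kin_feat, params):
    # Branch 1: embedding of features from the video motion profile
    h_v = dense_relu(video_feat, params["w_v"], params["b_v"])
    # Branch 2: embedding of the selected kinematic variables
    h_k = dense_relu(kin_feat, params["w_k"], params["b_k"])
    # Late fusion: concatenate the two modality embeddings
    h = np.concatenate([h_v, h_k])
    # Sigmoid output: score in (0, 1) for "trip contains a hazardous event"
    z = h @ params["w_out"] + params["b_out"]
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized parameters, for illustration only (untrained)
params = {
    "w_v": rng.normal(size=(64, 16)), "b_v": np.zeros(16),
    "w_k": rng.normal(size=(8, 16)),  "b_k": np.zeros(16),
    "w_out": rng.normal(size=32),     "b_out": 0.0,
}

# One hypothetical 12-second trip: a 64-d video feature and 8 kinematic variables
risk = fuse_and_score(rng.normal(size=64), rng.normal(size=8), params)
```

The design choice illustrated is late fusion: each modality is embedded separately and the embeddings are concatenated before the final decision, so the video and kinematic branches can be sized independently.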