Browsing by Author "Cheng, Guo"
Now showing 1 - 9 of 9
Item: All weather road edge identification based on driving video mining (IEEE, 2017)
Wang, Zheyuan; Cheng, Guo; Zheng, Jiang Yu; Computer and Information Science, School of Science
Road edge detection is a fundamental function for keeping a vehicle from running off the road. Current work on road edge detection has not exhaustively tackled all weather and illumination conditions. We first sort the visual appearance of roads by their physical and optical properties under various illuminations. A data mining approach is then applied to a large driving video set spanning the full spectrum of seasons and weather to learn the statistical distribution of road edge appearances. The learned color parameters of the road environment and road structure are used to coarsely classify the weather in a video, and the corresponding algorithm and features are applied for robust road edge detection. To visualize the road appearance and evaluate the accuracy of the detected road, a compact road profile image is generated, reducing the data to a small fraction of the video. Through exhaustive examination of all weather and illumination conditions, our road detection methods can locate road edges in good weather, reduce errors in dark illumination, and report road invisibility in poor illumination.

Item: Big-video mining of road appearances in full spectrums of weather and illuminations (IEEE, 2017-10)
Cheng, Guo; Wang, Zheyuan; Zheng, Jiang Yu; Computer and Information Science, School of Science
Autonomous and safe driving require keeping vehicles within roads. Compared to lane mark tracking, road edge detection is more difficult because of the large variation in road and off-road materials and the influence of weather and illumination. This work investigates the visual appearance of roads under a spectrum of weather conditions. We apply big-data mining to large-scale naturalistic driving videos taken over a full year through four seasons.
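As an illustration of the weather classification step described above, a frame's illumination type might be recovered by nearest-centroid matching over learned color statistics. This is only a sketch: the centroid values and labels below are hypothetical stand-ins for the mined clusters, not values from the papers.

```python
# Hedged sketch: classify a frame's weather/illumination type by the
# nearest centroid in a simple color-statistic space. Centroids are
# hypothetical stand-ins for clusters learned from driving video.
from math import dist

# Hypothetical cluster centers: mean (R, G, B) of the road region
# under each condition.
CENTROIDS = {
    "sunny":    (150.0, 140.0, 125.0),
    "overcast": (110.0, 110.0, 112.0),
    "night":    (35.0,  32.0,  40.0),
}

def classify_weather(road_rgb_mean):
    """Return the label whose centroid is closest (Euclidean distance)
    to the frame's mean road color."""
    return min(CENTROIDS, key=lambda k: dist(CENTROIDS[k], road_rgb_mean))
```

In a real pipeline the centroids would come from the unsupervised clustering of road profile samples, and the feature would be richer than a mean RGB triple.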
Large video volumes are condensed to compact road profile images for analysis. Clusters are extracted from all samples with unsupervised learning, and typical views across the spectrum of weather and illumination conditions are generated from the clusters. Further, by varying the number of clusters, we find a stable cluster count. The learned data are used to coarsely classify driving videos into typical illumination types. The surveyed data can also support the development and testing of road edge detection algorithms and systems.

Item: Body mass index is negatively associated with telomere length: a collaborative cross-sectional meta-analysis of 87 observational studies (Oxford University Press, 2018-09)
Gielen, Marij; Hageman, Geja J.; Antoniou, Evangelia E.; Nordfjall, Katarina; Mangino, Massimo; Balasubramanyam, Muthuswamy; de Meyer, Tim; Hendricks, Audrey E.; Giltay, Erik J.; Hunt, Steven C.; Nettleton, Jennifer A.; Salpea, Klelia D.; Diaz, Vanessa A.; Farzaneh-Far, Ramin; Atzmon, Gil; Harris, Sarah E.; Hou, Lifang; Gilley, David; Hovatta, Iiris; Kark, Jeremy D.; Nassar, Hisham; Kurz, David J.; Mather, Karen A.; Willeit, Peter; Zheng, Yun-Ling; Pavanello, Sofia; Demerath, Ellen W.; Rode, Line; Bunout, Daniel; Steptoe, Andrew; Boardman, Lisa; Marti, Amelia; Needham, Belinda; Zheng, Wei; Ramsey-Goldman, Rosalind; Pellatt, Andrew J.; Kaprio, Jaakko; Hofmann, Jonathan N.; Gieger, Christian; Paolisso, Giuseppe; Hjelmborg, Jacob B. H.; Mirabello, Lisa; Seeman, Teresa; Wong, Jason; van der Harst, Pim; Broer, Linda; Kronenberg, Florian; Kollerits, Barbara; Strandberg, Timo; Eisenberg, Dan T. A.; Duggan, Catherine; Verhoeven, Josine E.; Schaakxs, Roxanne; Zannolli, Raffaela; dos Reis, Rosana M. R.; Charchar, Fadi J.; Tomaszewski, Maciej; Mons, Ute; Demuth, Ilja; Iglesias Molli, Andrea Elena; Cheng, Guo; Krasnienkov, Dmytro; D'Antono, Bianca; Kasielski, Marek; McDonnell, Barry J.; Ebstein, Richard Paul; Sundquist, Kristina; Pare, Guillaume; Chong, Michael; Zeegers, Maurice P.; Medical and Molecular Genetics, School of Medicine
Background: Even before the onset of age-related diseases, obesity might be a contributing factor to the cumulative burden of oxidative stress and chronic inflammation throughout the life course, and may therefore contribute to accelerated shortening of telomeres. Consequently, obese persons are more likely to have shorter telomeres, but the association between body mass index (BMI) and leukocyte telomere length (TL) might differ across the life span and between ethnicities and sexes. Objective: A collaborative cross-sectional meta-analysis of observational studies was conducted to investigate the associations between BMI and TL across the life span. Design: Eighty-seven distinct study samples were included in the meta-analysis, capturing data from 146,114 individuals. Study-specific age- and sex-adjusted regression coefficients were combined using a random-effects model in which absolute TL [base pairs (bp)] and relative TL [telomere to single-copy gene ratio (T/S ratio)] were regressed against BMI. Stratified analyses were performed by three age categories ("young": 18-60 y; "middle": 61-75 y; and "old": >75 y), sex, and ethnicity. Results: Each unit increase in BMI corresponded to a -3.99 bp (95% CI: -5.17, -2.81 bp) difference in TL in the total pooled sample; among young adults, each unit increase in BMI corresponded to a -7.67 bp (95% CI: -10.03, -5.31 bp) difference.
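The random-effects combination of study-specific regression coefficients described in the Design can be sketched with the DerSimonian-Laird estimator, a standard choice for such models. The coefficients and standard errors used below are illustrative values, not data from the meta-analysis.

```python
# Sketch of random-effects pooling (DerSimonian-Laird estimator) of
# study-specific regression coefficients, as used in this kind of
# meta-analysis. Inputs are per-study coefficients and standard errors.
def random_effects_pool(betas, ses):
    """Pool coefficients with inverse-variance weights inflated by the
    between-study variance tau^2 (DerSimonian-Laird)."""
    k = len(betas)
    w = [1.0 / se ** 2 for se in ses]                 # fixed-effect weights
    beta_fe = sum(wi * b for wi, b in zip(w, betas)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (b - beta_fe) ** 2 for wi, b in zip(w, betas))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]     # random-effects weights
    return sum(wi * b for wi, b in zip(w_re, betas)) / sum(w_re)
```

With homogeneous studies tau^2 collapses to zero and the estimate reduces to the fixed-effect inverse-variance average.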
Each unit increase in BMI corresponded to a -1.58 × 10^-3 unit difference in T/S ratio (a 0.16% decrease; 95% CI: -2.14 × 10^-3, -1.01 × 10^-3) in age- and sex-adjusted relative TL in the total pooled sample; among young adults, each unit increase in BMI corresponded to a -2.58 × 10^-3 unit difference in T/S ratio (a 0.26% decrease; 95% CI: -3.92 × 10^-3, -1.25 × 10^-3). The associations were observed predominantly in the white pooled population. No sex differences were observed. Conclusions: A higher BMI is associated with shorter telomeres, especially in younger individuals. The observed difference is not negligible. Meta-analyses of longitudinal studies evaluating change in body weight alongside change in TL are warranted.

Item: Planning Autonomous Driving with Compact Road Profiles (IEEE Xplore, 2021-09)
Wang, Zheyuan; Cheng, Guo; Zheng, Jiang Yu; Computer and Information Science, School of Science
Sensing and control of self-driving vehicles based on full-view recognition is hard to run at high frequency on a fast-moving vehicle, as increasingly complex computation is required to cope with variations in the driving environment. This work instead explores a lightweight sensing-planning framework for autonomous driving. Taking advantage of the fact that a vehicle moves along a smooth path, we locate only a few sampling lines in the view to scan the road, vehicles, and environment continuously, which generates a small fraction of the full video data. We apply semantic segmentation to the streaming road profiles without redundant computation. In this paper, we plan vehicle path and motion based on this minimal data set, which contains the essential information for driving. From the lane, headway length, and vehicle motion detected in the road/motion profiles, the path and speed of the ego-vehicle as well as its interaction with surrounding vehicles are computed.
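As a hedged sketch of the speed-planning step (not the authors' actual controller), the ego speed might be capped so that a fixed time gap to the sensed headway is maintained. The function name and the two-second default gap are illustrative assumptions.

```python
# Hypothetical headway-based speed cap: keep at least `time_gap_s`
# seconds to the lead vehicle sensed in the road/motion profiles.
def plan_speed(headway_m, desired_mps, time_gap_s=2.0):
    """Return the ego speed (m/s) that keeps >= time_gap_s of headway,
    capped at the desired cruising speed."""
    if headway_m is None:                 # free road: no lead vehicle sensed
        return desired_mps
    return min(desired_mps, headway_m / time_gap_s)
```

A real planner would also account for relative speed and braking distance; this only shows how a scalar headway from the profiles feeds directly into speed control.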
This sensing-planning scheme, based on spatially sparse yet temporally dense data, ensures a fast response to events and yields smooth driving in busy traffic flow.

Item: SE3: Sequential Semantic Segmentation of Large Images with Minimized Memory (IEEE, 2022-08)
Cheng, Guo; Zheng, Jiang Yu; Computer and Information Science, School of Science
Semantic segmentation provides pixel-wise perception but demands GPU computation and expensive memory, which makes trained models hard to deploy on small devices at test time. Assuming hardware is available for training CNN backbones, this work converts them to a linear architecture that enables inference on edge devices. Keeping the same accuracy as patch-mode testing, we segment images with a scanning line using minimal memory. Exploiting the periodicity of the pyramid network as it shifts over the image, we perform sequential semantic segmentation (SE3) with a circular memory to avoid redundant computation while preserving the same receptive field as patches for spatial dependency. In experiments on large drone images and panoramas, we evaluate this approach in terms of accuracy, parameter memory, and testing speed. Benchmark evaluations demonstrate that, computing only one line at a time in linear time, our SE3 network consumes a small fraction of the memory while maintaining accuracy equivalent to patch-based image segmentation. For semantic segmentation of high-resolution images, particularly data streamed from sensors, this method is significant for real-time applications of CNN-based networks on lightweight edge devices.

Item: Semantic Segmentation of Road Profiles for Efficient Sensing in Autonomous Driving (IEEE, 2019-06)
Cheng, Guo; Zheng, Jiang Yu; Kilicarslan, Mehmet; Computer and Information Science, School of Science
In vision-based autonomous driving, the spatial layout of road and traffic must be understood at each moment. This involves detecting the road, vehicles, pedestrians, etc. in images.
In driving video, the spatial positions of these patterns are further tracked for their motion. This spatial-to-temporal approach inherently demands large computational resources. In this work, we instead take a temporal-to-spatial approach to cope with fast vehicle motion in autonomous navigation. We sample a one-pixel line in each frame of the driving video, and the temporal congregation of lines from consecutive frames forms a road profile image. The temporal connection of lines also provides layout information about the road and the surrounding environment. This method reduces the processed data to a fraction of the video so that sensing can keep up with the vehicle's speed. The key issue is then to identify the different regions in the road profile: it is divided in real time into road, roadside, lane marks, vehicles, etc., as well as motion events such as the stopping and turning of the ego-vehicle. We show in this paper that the road profile can be learned through semantic segmentation. We use RGB-F images of the road profile in semantic segmentation to capture both individual regions and their spatial relations on the road effectively. We have tested our method on naturalistic driving video, and the results are promising.

Item: Sequential Semantic Segmentation of Road Profiles for Path and Speed Planning (IEEE, 2022-12)
Cheng, Guo; Zheng, Jiang Yu; Computer and Information Science, School of Science
Driving video is available from in-car cameras for road detection and collision avoidance. However, consecutive video frames in a large volume have redundant scene coverage during vehicle motion, which hampers real-time perception in autonomous driving. This work utilizes compact road profiles (RP) and motion profiles (MP) to identify path regions and dynamic objects, which drastically reduces the video data to a lower dimension and increases the sensing rate.
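The road profile construction described above (one pixel line per frame, stacked over time) can be sketched as follows; plain nested lists stand in for decoded video frames.

```python
# Sketch of building a road profile image: take one pixel row from
# every frame and stack the rows over time, so the vertical axis of
# the resulting image is time and the horizontal axis is image width.
def road_profile(frames, sample_row):
    """Collect row `sample_row` of every frame; the stacked rows form
    a (num_frames x width) road profile image."""
    return [frame[sample_row] for frame in frames]
```

The profile has one row per frame, so an hour of video collapses to an image whose height equals the frame count, a small fraction of the full video volume.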
To avoid collisions at close range and navigate the vehicle at middle and far ranges, several RP/MPs are scanned continuously at different depths for vehicle path planning. We train a deep network to perform semantic segmentation of the RP in the spatial-temporal domain, and we further propose a temporally shifting memory for online testing. It segments every incoming line sequentially, without latency, by referring to a temporal window. In streaming mode, our method generates real-time output of road, roadsides, vehicles, pedestrians, etc. at discrete depths for path planning and speed control. We have tested our method on naturalistic driving videos under various weather and illumination conditions; it reaches the highest efficiency with the least amount of data.

Item: Sequential Semantic Segmentation of Streaming Scenes for Autonomous Driving (2022-12)
Cheng, Guo; Zheng, Jiang Yu; Tuceryan, Mihran; Mukhopadhyay, Snehasis; Tsechpenakis, Gavriil; Mohler, George
In traffic scene perception for autonomous vehicles, driving videos are available from in-car sensors such as cameras and LiDAR for road detection and collision avoidance. Several challenges remain in computer vision tasks for video processing, including object detection and tracking, semantic segmentation, etc. First, because consecutive video frames carry large data redundancy, the traditional spatial-to-temporal approach inherently demands huge computational resources. Second, in many real-time scenarios, targets move continuously in the view as data stream in; to achieve a prompt response with minimum latency, an online model that processes the streaming data in shift mode is necessary. Third, in addition to shape-based recognition in the spatial domain, motion detection relies on the inherent temporal continuity of video, yet current works either lack long-term memory for reference or consume a huge amount of computation.
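The temporally shifting memory for online line-wise testing might be sketched as a bounded ring buffer: each incoming scan line is segmented against a fixed-size temporal context, with old lines discarded automatically. The class name and window size below are illustrative, not from the papers.

```python
# Illustrative ring buffer for a "temporally shifting memory": keep
# only the last `window` scan lines so every incoming line can be
# processed against a bounded temporal context with no recomputation.
from collections import deque

class ShiftingMemory:
    def __init__(self, window):
        self.lines = deque(maxlen=window)   # oldest lines fall off automatically

    def push(self, line):
        """Add one incoming scan line and return the current temporal
        window (oldest first) that a segmentation model would consume."""
        self.lines.append(line)
        return list(self.lines)
```

Because the buffer never grows past `window` lines, memory stays constant no matter how long the stream runs, which is the property the online shift-mode model relies on.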
The purpose of this work is to achieve strongly temporally associated sensing results in real time with minimum memory, continually embedded in a pragmatic framework for speed and path planning. It takes a temporal-to-spatial approach to cope with fast vehicle motion in autonomous navigation. It utilizes compact road profiles (RP) and motion profiles (MP) to identify path regions and dynamic objects, which drastically reduces the video data to a lower dimension and increases the sensing rate. Specifically, we sample a one-pixel line in each video frame, and the temporal congregation of lines from consecutive frames forms a road profile image; the motion profile consists of average lines obtained by sampling a belt of pixels in each frame. By using the dense temporal resolution to compensate for the sparse spatial resolution, this method reduces 3D streaming data to a 2D image layout. Based on RP and MP under various weather conditions, three main tasks are conducted to contribute to the knowledge domain of perception and planning for autonomous driving. The first application is semantic segmentation of temporal-to-spatial streaming scenes, including recognition of road and roadside, driving events, and objects that are static or in motion. Since the main vision sensing tasks for autonomous driving are identifying the road area to follow and locating traffic to avoid collisions, this work tackles the problem with semantic segmentation on road and motion profiles. Although a one-pixel line may not contain sufficient spatial information about the road and objects, the consecutive collection of lines as a temporal-spatial image provides the intrinsic spatial layout because of the continuous observation and smooth vehicle motion. Moreover, by capturing the trajectories of pedestrians' moving legs in the motion profile, we can robustly distinguish pedestrians in motion against a smooth background.
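The motion profile line described above (averaging a belt of pixel rows in each frame) can be sketched as follows; frames are nested lists of grayscale values for simplicity.

```python
# Sketch of one motion profile line: average a small belt of
# consecutive pixel rows in a frame, column by column, to produce a
# single line that is stacked over time into the motion profile.
def motion_profile_line(frame, belt_top, belt_height):
    """Average `belt_height` consecutive rows of `frame` column-wise
    into one line of floats."""
    belt = frame[belt_top:belt_top + belt_height]
    width = len(belt[0])
    return [sum(row[x] for row in belt) / belt_height for x in range(width)]
```

Averaging over the belt smooths static background into slowly varying stripes, so a pedestrian's moving legs leave a distinctive trajectory in the stacked profile.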
Experimental results on streaming data collected from various sensors, including camera and LiDAR, demonstrate that effective recognition of the driving scene can be learned through semantic segmentation in the reduced temporal-to-spatial space. The second contribution of this work is that it adapts standard semantic segmentation into a sequential semantic segmentation network (SE3), which is implemented as a new benchmark for image and video segmentation. Most state-of-the-art methods pursue accuracy by designing complex structures at the expense of memory use, which makes trained models heavily dependent on GPUs and thus inapplicable to real-time inference. Without accuracy loss, this work enables image segmentation with minimal memory. Specifically, instead of predicting an image patch, SE3 generates output along with line scanning. By pinpointing the memory associated with the input line at each neural layer in the network, it preserves the same receptive field as the patch size while saving computation in the overlapped regions during network shifting. SE3 applies to most current backbone models in image segmentation, and it extends inference by fusing temporal information without increasing computational complexity for video semantic segmentation. It thus achieves 3D association over long range at the computational cost of a 2D setting, which will facilitate inference of semantic segmentation on lightweight devices. The third application is speed and path planning based on the sensing results from naturalistic driving videos. To avoid collisions at close range and navigate the vehicle at middle and far ranges, several RP/MPs are scanned continuously at different depths for vehicle path planning. The semantic segmentation of RP/MP is further extended to multiple depths for path and speed planning according to the sensed headway and lane position.
We conduct experiments on profiles at different sensing depths and build a smooth planning framework based on them. We also build an initial dataset of road and motion profiles with semantic labels from long HD driving videos. The dataset is published as an additional contribution toward future work in computer vision and autonomous driving.

Item: Sparse Coding of Weather and Illuminations for ADAS and Autonomous Driving (IEEE, 2018-06)
Cheng, Guo; Zheng, Jiang Yu; Murase, Hiroshi; Computer and Information Science, School of Science
Weather and illumination are critical factors in vision tasks such as road detection, vehicle recognition, and active lighting for autonomous vehicles and ADAS. Understanding the weather and illumination type in a vehicle's driving view can guide visual sensing, control the vehicle's headlights and speed, etc. This paper uses a sparse coding technique to identify weather types in driving video, given a set of bases from video samples covering a full spectrum of weather and illumination conditions. We sample traffic- and architecture-insensitive regions in each video frame for features and obtain clusters of weather and illumination via unsupervised learning. A set of keys is then carefully selected according to the visual appearance of the road and sky. For video input, a sparse code is computed for each frame to represent the vehicle's view robustly under a specific illumination. The linear combination of the bases from the keys yields the weather type for road recognition, active lighting, intelligent vehicle control, etc.
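The sparse coding of a frame against a set of weather bases can be illustrated with a minimal ISTA (iterative soft-thresholding) solver, one common way to compute such codes; the dictionary, penalty, and step size below are toy assumptions, not the paper's learned bases.

```python
# Minimal sparse-coding sketch: solve
#   min_a 0.5 * ||D a - x||^2 + lam * ||a||_1
# by iterative soft-thresholding (ISTA). D is a list of basis columns
# (weather "keys"); x is a frame feature vector; a is its sparse code.
def soft(z, t):
    """Soft-thresholding operator: shrink |z| by t, keep the sign."""
    return max(0.0, abs(z) - t) * (1.0 if z >= 0 else -1.0)

def ista(D, x, lam=0.05, step=0.1, iters=1000):
    m, n = len(x), len(D)
    a = [0.0] * n
    for _ in range(iters):
        # residual r = D a - x
        r = [sum(D[j][i] * a[j] for j in range(n)) - x[i] for i in range(m)]
        # gradient of the quadratic term: g = D^T r
        g = [sum(D[j][i] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding
        a = [soft(a[j] - step * g[j], step * lam) for j in range(n)]
    return a
```

With an orthonormal toy dictionary, the code of a feature aligned with one basis concentrates on that basis (shrunk slightly by the L1 penalty), mirroring how a frame's sparse code points at the matching weather key.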