IU Indianapolis ScholarWorks

Browsing by Author "Tian, Renran"

Now showing 1 - 10 of 14
    A Computationally Effective Pedestrian Detection using Constrained Fusion with Body Parts for Autonomous Driving
    (IEEE, 2021) Islam, Muhammad Mobaidul; Newaz, Abdullah Al Redwan; Tian, Renran; Homaifar, Abdollah; Karimoddini, Ali; Computer Information and Graphics Technology, School of Engineering and Technology
    This paper addresses the problem of detecting pedestrians using an enhanced object detection method. In particular, it considers occluded pedestrian detection in autonomous driving scenarios, where the balance between accuracy and speed is crucial. Existing works focus on learning representations of unique persons independent of body-part semantics. To achieve real-time performance along with robust detection, we introduce a body-parts-based pedestrian detection architecture in which body parts are fused through a computationally effective constrained optimization technique. We demonstrate that our method significantly improves detection accuracy while adding negligible runtime overhead. We evaluate our method on a real-world dataset, and experimental results show that it outperforms existing pedestrian detection methods.
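The abstract does not spell out the fusion itself; as a rough illustration only, a minimal sketch of fusing per-body-part confidences under simplex constraints (non-negative weights renormalized over the visible parts, so occlusion does not drag the fused score down; the part names and weights are hypothetical, not from the paper) might look like:

```python
def fuse_part_scores(part_scores, part_weights):
    """Fuse per-body-part detection confidences under simplex constraints.

    part_scores:  dict of part name -> confidence in [0, 1], or None if occluded
    part_weights: dict of part name -> raw non-negative importance weight
    Returns a single fused pedestrian confidence.
    """
    visible = {p: s for p, s in part_scores.items() if s is not None}
    if not visible:
        return 0.0
    # Constraint: weights are clipped to be non-negative and renormalized
    # over the visible parts only.
    total = sum(max(part_weights[p], 0.0) for p in visible)
    if total == 0.0:
        return sum(visible.values()) / len(visible)
    return sum(max(part_weights[p], 0.0) / total * visible[p] for p in visible)

# Hypothetical scores: a fully visible pedestrian vs. one with occluded legs.
full = fuse_part_scores({"head": 0.9, "torso": 0.8, "legs": 0.7},
                        {"head": 1.0, "torso": 2.0, "legs": 1.0})
occluded = fuse_part_scores({"head": 0.9, "torso": 0.8, "legs": None},
                            {"head": 1.0, "torso": 2.0, "legs": 1.0})
```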
    Assessing the Effectiveness of In-Vehicle Highway Back-of-Queue Alerting System
    (The National Academies of Sciences, Engineering, and Medicine, 2021-01) Shen, Dan; Zhang, Zhengming; Ruan, Keyu; Tian, Renran; Li, Lingxi; Li, Feng; Chen, Yaobin; Sturdevant, Jim; Cox, Ed; Electrical and Computer Engineering, School of Engineering and Technology
    This paper proposes an in-vehicle back-of-queue alerting system that issues alert messages to drivers on highways approaching traffic queues. A prototype system was implemented to deliver the in-vehicle alert messages to drivers via an Android-based smartphone app. To assess its effectiveness, a set of test scenarios was designed and implemented on a state-of-the-art driving simulator. Subjects were recruited and their testing data were collected under two driver states (normal and distracted) and three alert types (no alerts, roadside alerts, and in-vehicle auditory alerts). Effectiveness was evaluated using three parameters of interest: 1) the minimum time-to-collision (mTTC), 2) the maximum deceleration, and 3) the maximum lateral acceleration. Statistical models were used to examine the usefulness and benefits of each alert type. The results show that the in-vehicle auditory alert is the most effective way to deliver alert messages to drivers. More specifically, it significantly increases the mTTC (30% longer than with no warning) and decreases the maximum lateral acceleration (60% less than with no warning), giving drivers more reaction time and improving the driving stability of their vehicles. Driver distraction significantly decreases the effectiveness of the roadside traffic-sign alert: when the driver is distracted, the roadside traffic-sign alert performs significantly worse in terms of mTTC than during normal driving. This highlights the importance of the in-vehicle auditory alert when the driver is distracted.
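The minimum time-to-collision used here is a standard surrogate safety measure; a minimal sketch of computing mTTC from simulator time-series logs (the log values below are hypothetical, not the paper's data) could look like:

```python
def min_time_to_collision(samples):
    """Minimum time-to-collision (mTTC) over a braking event.

    samples: list of (gap_m, closing_speed_mps) pairs logged by the simulator.
    TTC = gap / closing speed, defined only while the vehicles are closing
    (closing speed > 0); mTTC is the minimum over the event.
    """
    ttcs = [gap / v for gap, v in samples if v > 0]
    return min(ttcs) if ttcs else float("inf")

# Hypothetical log: the gap shrinks while the driver reacts, then braking
# brings the closing speed down to zero.
log = [(40.0, 8.0), (30.0, 8.0), (22.0, 6.0), (18.0, 3.0), (17.0, 0.0)]
mttc = min_time_to_collision(log)
```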
    Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3D LIDAR and Multi-Camera Setup
    (2020-12) Betrabet, Siddhant S.; Tian, Renran; Zhu, Likun; Anwar, Sohel
    Analyzing the behaviors of objects on the road is a complex task that requires data from various sensors and their fusion to recreate the movement of objects with a high degree of accuracy. A data collection and processing system is thus needed to track objects accurately and produce a clear map of their trajectories relative to the coordinate frame(s) of interest. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are tasks that need to be solved in conjunction to create a map of the road comprising both moving and static objects. These computational problems are commonly solved together to aid scenario reconstruction for the objects of interest. Objects can be tracked in various ways using sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, and inertial navigation systems (INS). One relatively common method for solving DATMO and SLAM combines a 3D LIDAR and multiple monocular cameras with an inertial measurement unit (IMU); the resulting redundancy allows sensor fusion to maintain object classification and tracking in cases where sensor-specific traditional algorithms prove ineffectual because an individual sensor falls short of its limitations. Using an IMU with sensor-fusion methods also largely eliminates the need for an expensive INS rig. Fusing these sensors enables more effective tracking that exploits the full potential of each sensor while improving perceptual accuracy. The focus of this thesis is the dock-less e-scooter, and the primary goal is to track its movements effectively and accurately with respect to cars on the road and the world.
Since it is relatively more common to observe a car on the road than an e-scooter, we propose a data collection system that can be built on top of an e-scooter, together with an offline processing pipeline, to collect data that helps us understand the behaviors of e-scooters themselves. In this thesis, we explore a data collection system comprising a 3D LIDAR sensor, multiple monocular cameras, and an IMU mounted on an e-scooter, as well as an offline method for processing the collected data to aid scenario reconstruction.
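One concrete step in such an offline pipeline is aligning camera frames with LIDAR scans by timestamp before fusion. A minimal nearest-timestamp matcher (the tolerance and timestamps below are assumptions for illustration, not values from the thesis) might look like:

```python
import bisect

def match_frames_to_scans(frame_ts, scan_ts, tol=0.05):
    """Pair each camera frame with the nearest LIDAR scan by timestamp.

    frame_ts, scan_ts: sorted lists of timestamps in seconds.
    tol: maximum allowed offset; frames with no scan within tol map to None.
    Returns a list of (frame index, scan index or None) pairs.
    """
    pairs = []
    for i, t in enumerate(frame_ts):
        j = bisect.bisect_left(scan_ts, t)
        # Candidates: the scan just before and just after the frame time.
        cands = [k for k in (j - 1, j) if 0 <= k < len(scan_ts)]
        best = min(cands, key=lambda k: abs(scan_ts[k] - t), default=None)
        if best is not None and abs(scan_ts[best] - t) <= tol:
            pairs.append((i, best))
        else:
            pairs.append((i, None))
    return pairs

# Hypothetical 30 Hz camera timestamps vs. sparser LIDAR scan timestamps.
pairs = match_frames_to_scans([0.00, 0.033, 0.066], [0.01, 0.05, 0.40])
```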
    E-scooter Rider Detection System in Driving Environments
    (2021-08) Apurv, Kumar; Zheng, Jiang; Tian, Renran; Tsechpenakis, Gavriil
    E-scooters are ubiquitous and their numbers keep growing, increasing their interactions with other vehicles on the road. E-scooter riders exhibit atypical behavior that differs markedly from that of other vulnerable road users, creating new challenges for vehicle active-safety systems and automated driving functionalities. Detecting e-scooter riders from other vehicles is the first step toward mitigating these risks. This research presents a novel vision-based system to differentiate between e-scooter riders and regular pedestrians, along with a benchmark dataset of e-scooter riders in natural environments. An efficient system pipeline built from two existing state-of-the-art convolutional neural networks (CNNs), You Only Look Once (YOLOv3) and MobileNetV2, performs detection of these vulnerable e-scooter riders.
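The two-stage structure described above, a person detector followed by a lightweight rider-vs-pedestrian classifier, can be sketched with stand-in models. The stubs below only mimic the interfaces of the trained YOLOv3 and MobileNetV2 networks; they are not the paper's models:

```python
def rider_pipeline(image, detect_persons, classify_crop, thresh=0.5):
    """Two-stage sketch: a detector proposes person boxes, then a lightweight
    classifier labels each cropped box as e-scooter rider or pedestrian.

    detect_persons(image) -> list of ((x0, y0, x1, y1), score)
    classify_crop(crop)   -> probability that the crop shows a rider
    """
    results = []
    for box, score in detect_persons(image):
        x0, y0, x1, y1 = box
        crop = [row[x0:x1] for row in image[y0:y1]]  # naive pixel crop
        label = ("e-scooter rider" if classify_crop(crop) >= thresh
                 else "pedestrian")
        results.append((box, score, label))
    return results

# Toy image and stand-in models, purely to exercise the pipeline structure.
img = [[0] * 8 for _ in range(8)]
img[4][4] = 1  # mark the second region so the stub classifier flips
dets = lambda im: [((0, 0, 4, 4), 0.9), ((4, 4, 8, 8), 0.8)]
clf = lambda crop: 0.7 if crop[0][0] == 0 else 0.1
out = rider_pipeline(img, dets, clf)
```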
    Flexible and Scalable Annotation Tool to Develop Scene Understanding Datasets
    (National Science Foundation, 2022) Elahi, Md Fazle; Tian, Renran; Luo, Xiao; Electrical and Computer Engineering, Purdue School of Engineering and Technology
    Recent progress in data-driven vision- and language-based tasks demands training datasets enriched with multiple modalities representing human intelligence. The link between text and image data is one of the crucial modalities for developing AI models. Developing such datasets in the video domain requires much effort from researchers and annotators (experts and non-experts). Researchers re-design annotation tools to extract knowledge from annotators to answer new research questions, and the whole process repeats for each new question, which is time-consuming. Yet over the last decade there has been little change in how researchers and annotators interact with the annotation process. We revisit the annotation workflow and propose the concept of an adaptable and scalable annotation tool. The concept emphasizes user interactivity to make annotation-process design seamless and efficient. Researchers can conveniently add new modalities to, or augment, existing datasets using the tool, and annotators can efficiently link free-form text to image objects. For conducting human-subject experiments at any scale, the tool supports data collection for attaining group ground truth. We conducted a case study using a prototype tool with 74 non-expert participants split into two groups. We find that interactively linking free-form text to image objects feels intuitive and evokes a thought process that results in high-quality annotation. The new design shows a ≈35% improvement in data-annotation quality. In a UX evaluation, we received above-average positive feedback from 25 people regarding convenience, UI assistance, usability, and satisfaction.
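The paper does not publish its data schema; purely as a hypothetical illustration, a record linking a free-form text span to image objects, plus a majority-vote helper for deriving group ground truth across annotators, might look like:

```python
# Hypothetical annotation record; all identifiers are invented, not the
# tool's actual schema.
annotation = {
    "video_id": "clip_0042",
    "frame": 117,
    "text": "the pedestrian waves at the turning car",
    "links": [
        {"span": [4, 14], "object_id": "ped_3"},   # "pedestrian" -> box ped_3
        {"span": [27, 38], "object_id": "car_7"},  # "turning car" -> box car_7
    ],
    "annotator": "worker_21",
}

def group_ground_truth(annotations, span, min_agreement=0.5):
    """Majority-vote object link for one text span across annotators.

    Returns the winning object_id, or None if no link reaches the
    required agreement fraction.
    """
    votes = {}
    for ann in annotations:
        for link in ann["links"]:
            if link["span"] == list(span):
                votes[link["object_id"]] = votes.get(link["object_id"], 0) + 1
    if not votes:
        return None
    obj, n = max(votes.items(), key=lambda kv: kv[1])
    return obj if n / len(annotations) >= min_agreement else None
```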
    Implementation and Performance Evaluation of In-vehicle Highway Back-of-Queue Alerting System Using the Driving Simulator
    (IEEE Xplore, 2021-09) Zhang, Zhengming; Shen, Dan; Tian, Renran; Li, Lingxi; Chen, Yaobin; Sturdevant, Jim; Cox, Ed; Electrical and Computer Engineering, School of Engineering and Technology
    This paper proposes a prototype in-vehicle highway back-of-queue alerting system built around an Android smartphone app that delivers warning information to drivers approaching traffic queues. To evaluate the system's effectiveness, subjects were recruited to participate in designed test scenarios on a driving simulator. The test scenarios combine three warning types (no alerts, roadside alerts, and in-vehicle auditory alerts), three driver states (normal, distracted, and drowsy), and two weather conditions (sunny and foggy). Driver responses in the form of vehicle dynamics data were collected and analyzed. The results indicate that, on average, the drowsy state decreases the minimum time-to-collision by 1.6 seconds compared to the normal state. In-vehicle auditory alerts effectively increase driving safety across different combinations of driver states and weather conditions, while roadside alerts perform better than no alerts.
    Integrating Data-driven Control Methods with Motion Planning: A Deep Reinforcement Learning-based Approach
    (2023-12) Prabu, Avinash; Li, Lingxi; Chen, Yaobin; King, Brian; Tian, Renran
    Path-tracking control is an integral part of motion planning in autonomous vehicles: a control system issues acceleration and steering-angle commands so that the vehicle accurately tracks the longitudinal and lateral movements of a pre-defined trajectory. Extensive research has been conducted to address the growing need for efficient algorithms in this area. In this dissertation, a scenario- and machine-learning-based data-driven control approach is proposed for a path-tracking controller. First, a deep reinforcement learning model is developed to control longitudinal speed, trained with the Deep Deterministic Policy Gradient (DDPG) algorithm; its main objective is to maintain a safe distance from a lead vehicle (if present) or track a velocity set by the driver. Second, a lateral steering controller is developed using neural networks to control the steering angle of the vehicle, with the main goal of following a reference trajectory. A path-planning algorithm is then developed using a hybrid A* planner. Finally, the longitudinal and lateral control models are coupled into a complete path-tracking controller that follows a path generated by the hybrid A* algorithm across a wide range of vehicle speeds. State-of-the-art path-tracking controllers built with Model Predictive Control and Stanley control are used as baselines to evaluate the proposed model. The results show the effectiveness of both proposed models in the same scenarios in terms of velocity error, lateral yaw-angle error, and lateral distance error, and the simulation results show that the developed hybrid A* algorithm performs well compared to state-of-the-art path-planning algorithms.
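Of the baselines named above, the Stanley controller has a compact closed form: the steering command is the heading error plus a term that steers toward the path in proportion to cross-track error over speed. A minimal sketch (the gain `k` and the speed-softening term are assumptions, not the dissertation's tuning) is:

```python
import math

def stanley_steering(cross_track_err, heading_err, speed, k=1.0, eps=1e-3):
    """Stanley steering law: delta = heading_err + atan(k * e / v).

    cross_track_err: signed lateral offset from the reference path (m)
    heading_err:     heading difference to the path tangent (rad)
    speed:           vehicle speed (m/s); eps avoids division by zero at rest
    """
    return heading_err + math.atan2(k * cross_track_err, speed + eps)

# On the path with no heading error the command is zero; a positive offset
# steers back toward the path.
straight = stanley_steering(0.0, 0.0, 10.0)
correcting = stanley_steering(1.0, 0.0, 10.0)
```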
    Modeling Spatiotemporal Pedestrian-Environment Interactions for Predicting Pedestrian Crossing Intention from the Ego-View
    (2021-08) Chen, Chen (Tina); Li, Lingxi; Tian, Renran; Lauren, Christopher; Ding, Zhengming
    For pedestrians and autonomous vehicles (AVs) to co-exist harmoniously and safely in the real world, AVs will need to not only react to pedestrian actions, but also anticipate their intentions. In this thesis, we propose to use rich visual and pedestrian-environment interaction features to improve pedestrian crossing intention prediction from the ego-view. We do so by combining visual feature extraction, graph modeling of scene objects and their relationships, and feature encoding as comprehensive inputs for an LSTM encoder-decoder network. Pedestrians react and make decisions based on their surrounding environment and the behaviors of other road users around them. Human-human social relationships have already been explored for pedestrian trajectory prediction from the bird's-eye view of stationary cameras. However, context and pedestrian-environment relationships are often missing in current research into pedestrian trajectory and intention prediction from the ego-view. To map the pedestrian's relationship to surrounding objects, we use a star graph with the pedestrian in the center connected to all other road objects/agents in the scene. The pedestrian and road objects/agents are represented in the graph through visual features extracted using state-of-the-art deep learning algorithms. We use graph convolutional networks and graph autoencoders to encode the star graphs in a lower dimension. Using the graph encodings, pedestrian bounding boxes, and human pose estimation, we propose a novel model that predicts pedestrian crossing intention using not only the pedestrian's action behaviors (bounding box and pose estimation), but also their relationship to their environment. Through tuning hyperparameters and experimenting with different graph convolutions for our graph autoencoder, we are able to improve on the state-of-the-art results.
Our context-driven method outperforms the current state of the art on the benchmark dataset Pedestrian Intention Estimation (PIE). The state of the art predicts pedestrian crossing intention with a balanced accuracy (to account for dataset imbalance) of 0.61, while our best-performing model achieves a balanced accuracy of 0.79. Our model especially outperforms in no-crossing-intention scenarios, with an F1 score of 0.56 compared to the state of the art's 0.36. Additionally, we experiment with training the state-of-the-art model and our model to predict pedestrian crossing action and intention jointly. While jointly predicting crossing action does not improve crossing intention prediction, the distinction between predicting crossing action and predicting intention is an important one.
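The star-graph construction can be made concrete: node 0 is the pedestrian, connected to every scene object, and one graph-convolution step propagates features over the symmetric-normalized adjacency. A minimal sketch (with self-loops as in a standard GCN; the feature values are toy placeholders, not the thesis's visual features) is:

```python
import math

def star_adjacency(n_objects):
    """Adjacency of a star graph with self-loops: node 0 is the pedestrian,
    nodes 1..n_objects are the surrounding road objects/agents."""
    n = n_objects + 1
    return [[1.0 if (i == j or i == 0 or j == 0) else 0.0
             for j in range(n)] for i in range(n)]

def gcn_propagate(a, feats):
    """One symmetric-normalized propagation step: D^-1/2 A D^-1/2 X."""
    n = len(a)
    deg = [sum(row) for row in a]
    out = []
    for i in range(n):
        row = []
        for f in range(len(feats[0])):
            s = sum(a[i][j] * feats[j][f] / math.sqrt(deg[i] * deg[j])
                    for j in range(n))
            row.append(s)
        out.append(row)
    return out

a = star_adjacency(2)                       # pedestrian + 2 road objects
h = gcn_propagate(a, [[1.0], [0.0], [0.0]]) # only the pedestrian has signal
```

After one step the pedestrian's feature has spread to every object node, which is exactly why the star layout lets the encoder capture pedestrian-environment interactions.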
    Pedestrian/Bicyclist Limb Motion Analysis from 110-Car TASI Video Data for Autonomous Emergency Braking Testing Surrogate Development
    (SAE, 2016-04) Sherony, Rini; Tian, Renran; Chien, Stanley; Fu, Li; Chen, Yaobin; Takahashi, Hiroyuki; Department of Engineering Technology, School of Engineering and Technology
    Many vehicles are currently equipped with active safety systems that can detect vulnerable road users such as pedestrians and bicyclists in order to mitigate conflicts with vehicles. With advances in technologies and algorithms, the detailed motions of these targets, especially limb motions, are being considered for improving the efficiency and reliability of object detection. It is therefore important to understand these limb motions to support the design and evaluation of many vehicular safety systems. In the current literature, however, no agreement has been reached on whether, and how often, these limbs move, especially at the most critical moments for potential crashes. In this study, a total of 832 pedestrian-walking or cyclist-biking cases were randomly selected from a large-scale naturalistic driving database containing 480,000 video segments with a total size of 94 TB, and the 832 video clips were analyzed with a focus on limb motions. We modeled the pedestrian/bicyclist limb motions in four layers: (1) the percentage of pedestrians and bicyclists who exhibit limb motions when crossing the road; (2) the average action frequency and the corresponding distributions when limb motions occur; (3) comparisons of limb-motion behavior between crossing and non-crossing cases; and (4) the effects of season on limb motions when pedestrians/bicyclists are crossing the road. The results of this study provide empirical foundations supporting surrogate development, benefit analysis, and standardized testing of vehicular pedestrian/bicyclist detection and crash mitigation systems.
    Peek into the Future: Camera-based Occupant Sensing in Configurable Cabins for Autonomous Vehicles
    (IEEE Xplore, 2021-09-19) Prabu, Avinash; Tian, Renran; Li, Lingxi; Le, Jialiang; Sundararajan, Srinivasan; Barbat, Saeed; Electrical and Computer Engineering, School of Engineering and Technology
    The development of fully autonomous vehicles (AVs) can potentially eliminate drivers and introduce unprecedented seating designs. However, highly flexible seat configurations may lead to unconventional occupant poses and actions. Understanding occupant behaviors and prioritizing safety features have become prominent topics at the AV research frontier. Visual sensors have the advantages of cost-efficiency and high-fidelity imaging and are becoming more widely applied for in-car sensing. Occlusion is a major concern for this type of system in crowded car cabins. It is important, but largely unknown, what a visual-sensing framework should look like to support 2-D and 3-D human pose tracking with highly configurable seats. As one of the first studies to address this topic, we peek into a future camera-based sensing framework via a simulation experiment. With representative car cabins, seat layouts, and occupant sizes constructed, camera coverage from different angles and positions is simulated and calculated. The comprehensive coverage data are synthesized through an optimization process to determine the camera layout and overall occupant coverage. The results show how many cameras are needed, and how they should be placed, to fully or partially cover all occupants across changeable configurations of up to six seats.
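The paper's exact optimization is not specified in the abstract; a greedy set-cover heuristic is one common stand-in for this kind of camera-layout problem, repeatedly picking the position that adds the most uncovered occupant samples. A minimal sketch (the camera positions and coverage sets below are hypothetical) is:

```python
def greedy_camera_layout(coverage, max_cams):
    """Greedy set-cover heuristic for camera placement.

    coverage: dict of camera position -> set of (seat, pose) samples it sees.
    max_cams: budget on the number of cameras.
    Returns the chosen positions and the samples they jointly cover.
    """
    covered, chosen = set(), []
    for _ in range(max_cams):
        # Pick the position that adds the most not-yet-covered samples.
        pos = max(coverage, key=lambda p: len(coverage[p] - covered),
                  default=None)
        if pos is None or not (coverage[pos] - covered):
            break  # nothing left to gain
        chosen.append(pos)
        covered |= coverage[pos]
    return chosen, covered

# Hypothetical coverage of six seat/pose samples from three mount points.
cams = {
    "front":   {"s1", "s2", "s3"},
    "ceiling": {"s2", "s4", "s5", "s6"},
    "rear":    {"s5", "s6"},
}
chosen, covered = greedy_camera_layout(cams, max_cams=2)
```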