Browsing by Subject "Autonomous vehicle"
Now showing 1 - 2 of 2
Item: Integration of V2V-AEB system with wearable cardiac monitoring system and reduction of V2V-AEB system time constraints (2017)
Bhatnagar, Shalabh; Chien, Stanley

An Autonomous Emergency Braking (AEB) system uses a vehicle's on-board sensors, such as radar, LIDAR, camera, and infrared, to detect potential collisions, alert the driver, and make a braking decision to avoid a collision. Its limitation is that it requires a clear line of sight to detect what is in front of the vehicle. In current V2V (vehicle-to-vehicle communication) systems, by contrast, vehicles communicate with each other over a wireless network and share information about their states, so the safety benefit of a V2V system is limited to vehicles with communication capability. Our idea is to integrate the complementary capabilities of V2V and AEB systems to overcome the limitations of both. In a V2V-AEB system, vehicles exchange information about the objects detected by their on-board sensors along with their own locations, speeds, and movements. The object information detected by a vehicle and the information received through the V2V network are processed by the AEB system of the subject vehicle. If a crash is imminent, the AEB system alerts the driver or, in critical conditions, applies the brake automatically to prevent the collision.

To advance the V2V-AEB system, we have developed an intelligent heart-monitoring system and integrated it with the vehicle's V2V-AEB system. Advances in wearable and implantable sensors enable them to communicate the driver's health condition to PCs and handheld devices. Part of this thesis work concentrates on monitoring the driver's heart status in real time using a fitness tracker. In the case of a critical health condition, such as cardiac arrest of the driver, the system informs the vehicle to make an appropriate operating decision and broadcast emergency messages over the V2V network. This makes other vehicles and emergency services aware of the emergency, which can help the driver get immediate medical attention and prevent accident casualties.

To ensure that the effectiveness of the V2V-AEB system is not reduced by time delay, it is necessary to study the effect of delay thoroughly and handle it properly. One common practice for handling delayed vehicle trajectory information is to extrapolate the trajectory to the current time. We put forward a dynamic method that reduces the effect of delay in different environments without extrapolating the pedestrian's trajectory; it dynamically adjusts the AEB start-braking time according to the estimated delay in the scenario.

This thesis also addresses the problem of communication overload caused by the V2V-AEB system. If there are n vehicles in a V2V network and each vehicle detects m objects, the message density in the network will be n*m. Processing that many messages takes considerable computation power in the receiving vehicle and delays the braking decision. To prevent message flooding in the V2V-AEB system, several approaches are suggested to reduce the number of messages in the V2V network, including not sending information about objects that cannot cause a potential collision and grouping object information into a single message.
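As a rough illustration of the message-reduction idea summarized above, the following Python sketch filters out detected objects that are not on a potential collision course with any known V2V neighbor and groups the remaining object reports into a single broadcast message per vehicle, reducing the message count from one per detected object toward one per sender. The DetectedObject/V2VMessage structures, the time-to-collision threshold, and the 1-D geometry are hypothetical simplifications for self-containment; the thesis does not prescribe this exact implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

TTC_THRESHOLD_S = 4.0  # hypothetical time-to-collision cutoff for "potentially colliding"

@dataclass
class DetectedObject:
    obj_id: int
    position_m: float      # 1-D position along the road, for simplicity
    velocity_mps: float

@dataclass
class V2VMessage:
    sender_id: int
    objects: List[DetectedObject]  # grouped object reports in one message

def time_to_collision(follower: DetectedObject, leader: DetectedObject) -> Optional[float]:
    """Return closing time in seconds, or None if the gap is not closing."""
    gap = leader.position_m - follower.position_m
    closing_speed = follower.velocity_mps - leader.velocity_mps
    if gap <= 0 or closing_speed <= 0:
        return None
    return gap / closing_speed

def build_message(sender_id: int,
                  detections: List[DetectedObject],
                  neighbors: List[DetectedObject]) -> Optional[V2VMessage]:
    """Keep only detections that may collide with a neighbor, grouped into one message."""
    relevant = []
    for obj in detections:
        for veh in neighbors:
            ttc = time_to_collision(veh, obj)
            if ttc is not None and ttc < TTC_THRESHOLD_S:
                relevant.append(obj)
                break
    # One message per sender instead of one message per detected object.
    return V2VMessage(sender_id, relevant) if relevant else None

if __name__ == "__main__":
    detections = [DetectedObject(1, 80.0, 0.0), DetectedObject(2, 500.0, 0.0)]
    neighbors = [DetectedObject(99, 0.0, 25.0)]  # an approaching V2V-equipped vehicle
    msg = build_message(sender_id=7, detections=detections, neighbors=neighbors)
    print(msg)  # only the object that poses a potential collision is broadcast
```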
Item: Modeling Spatiotemporal Pedestrian-Environment Interactions for Predicting Pedestrian Crossing Intention from the Ego-View (2021-08)
Chen, Chen (Tina); Li, Lingxi; Tian, Renran; Lauren, Christopher; Ding, Zhengming

For pedestrians and autonomous vehicles (AVs) to co-exist harmoniously and safely in the real world, AVs will need not only to react to pedestrian actions but also to anticipate their intentions. In this thesis, we propose to use rich visual and pedestrian-environment interaction features to improve pedestrian crossing intention prediction from the ego-view. We do so by combining visual feature extraction, graph modeling of scene objects and their relationships, and feature encoding as comprehensive inputs for an LSTM encoder-decoder network.

Pedestrians react and make decisions based on their surrounding environment and the behavior of other road users around them. Human-human social relationships have already been explored for pedestrian trajectory prediction from the bird's-eye view with stationary cameras. However, context and pedestrian-environment relationships are often missing in current research on pedestrian trajectory and intention prediction from the ego-view. To map the pedestrian's relationship to surrounding objects, we use a star graph with the pedestrian at the center connected to all other road objects/agents in the scene. The pedestrian and road objects/agents are represented in the graph through visual features extracted using state-of-the-art deep learning algorithms. We use graph convolutional networks and graph autoencoders to encode the star graphs in a lower dimension. Using the graph encodings, pedestrian bounding boxes, and human pose estimation, we propose a novel model that predicts pedestrian crossing intention using not only the pedestrian's action behaviors (bounding box and pose estimation), but also their relationship to the environment.

By tuning hyperparameters and experimenting with different graph convolutions for our graph autoencoder, we are able to improve on state-of-the-art results. Our context-driven method outperforms the current state of the art on the benchmark dataset Pedestrian Intention Estimation (PIE). The state of the art predicts pedestrian crossing intention with a balanced accuracy (to account for dataset imbalance) of 0.61, while our best-performing model achieves a balanced accuracy of 0.79. Our model especially outperforms in no-crossing-intention scenarios, with an F1 score of 0.56 compared to the state of the art's 0.36. Additionally, we experiment with training the state-of-the-art model and our model to predict pedestrian crossing action and intention jointly. While jointly predicting crossing action does not improve crossing intention prediction, the distinction between predicting crossing action and predicting crossing intention remains an important one.
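To illustrate the pedestrian-centered star-graph encoding described in this abstract, the sketch below builds the adjacency matrix of a star graph (pedestrian at node 0 connected to every other road object/agent) and applies one symmetric-normalized graph-convolution step to per-node feature vectors. The feature dimensions, random weights, and single-layer propagation are hypothetical simplifications; the thesis pipeline uses learned graph convolutional networks and graph autoencoders whose encodings feed an LSTM encoder-decoder, which this toy example does not reproduce.

```python
import numpy as np

def star_adjacency(num_objects: int) -> np.ndarray:
    """Star graph: node 0 is the pedestrian, nodes 1..num_objects are road objects/agents."""
    n = num_objects + 1
    adj = np.zeros((n, n))
    adj[0, 1:] = 1.0   # pedestrian connected to every scene object
    adj[1:, 0] = 1.0
    return adj

def gcn_layer(adj: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One graph-convolution step: D^(-1/2) (A + I) D^(-1/2) X W, followed by ReLU."""
    a_hat = adj + np.eye(adj.shape[0])                      # add self-loops
    deg_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = deg_inv_sqrt @ a_hat @ deg_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_objects, feat_dim, hidden_dim = 5, 16, 8            # hypothetical sizes
    adj = star_adjacency(num_objects)
    node_features = rng.normal(size=(num_objects + 1, feat_dim))  # stand-in for extracted visual features
    weights = rng.normal(size=(feat_dim, hidden_dim))
    node_embeddings = gcn_layer(adj, node_features, weights)
    # In the full model, such graph encodings (together with bounding boxes and pose
    # estimation) would be fed to an LSTM encoder-decoder to predict crossing intention.
    print(node_embeddings.shape)  # (6, 8)
```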