Integrating Data-driven Control Methods with Motion Planning: A Deep Reinforcement Learning-based Approach

dc.contributor.advisor: Li, Lingxi
dc.contributor.author: Prabu, Avinash
dc.contributor.other: Chen, Yaobin
dc.contributor.other: King, Brian
dc.contributor.other: Tian, Renran
dc.date.accessioned: 2024-01-08T16:36:07Z
dc.date.available: 2024-01-08T16:36:07Z
dc.date.issued: 2023-12
dc.degree.date: 2023
dc.degree.discipline: Electrical & Computer Engineering
dc.degree.grantor: Purdue University
dc.degree.level: Ph.D.
dc.description: Indiana University-Purdue University Indianapolis (IUPUI)
dc.description.abstract: Path-tracking control is an integral part of motion planning in autonomous vehicles: a control system issues acceleration and steering-angle commands so that the vehicle's longitudinal and lateral motion accurately follows a pre-defined trajectory. Extensive research has been conducted to address the growing need for efficient algorithms in this area. In this dissertation, a scenario- and machine learning-based data-driven control approach for path tracking is proposed. First, a deep reinforcement learning model is developed to control longitudinal speed, trained with the Deep Deterministic Policy Gradient (DDPG) algorithm. The main objective of this model is to maintain a safe distance from a lead vehicle (if present) or to track a velocity set by the driver. Second, a lateral steering controller based on neural networks is developed to command the steering angle so that the vehicle follows a reference trajectory. Then, a path-planning algorithm is developed using a hybrid A* planner. Finally, the longitudinal and lateral control models are coupled into a complete path-tracking controller that follows paths generated by the hybrid A* algorithm over a wide range of vehicle speeds. State-of-the-art path-tracking controllers based on Model Predictive Control and Stanley control are also built to benchmark the proposed model. The results show the effectiveness of the proposed models under the same scenarios in terms of velocity error, lateral yaw angle error, and lateral distance error. Simulation results also show that the developed hybrid A* algorithm performs well in comparison with state-of-the-art path-planning algorithms.
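The sketch below illustrates, in broad strokes, how the coupled controller described in the abstract could be wired together: a longitudinal policy produces an acceleration command and a lateral policy produces a steering command, both applied to a simple vehicle model tracking a reference path. All function names, gains, the kinematic bicycle update, and the straight-line stand-in for the hybrid A* path are illustrative assumptions, not taken from the dissertation; the trained DDPG actor and steering network are replaced by placeholder policies.

```python
import numpy as np

def ddpg_longitudinal_policy(gap, rel_speed, ego_speed, set_speed):
    """Placeholder for a trained DDPG actor: returns an acceleration command.

    Tracks the driver-set speed, but backs off when the gap to a lead
    vehicle (if any) becomes small. Gains and limits are illustrative.
    """
    if gap is not None and gap < 2.0 * ego_speed:  # rough 2 s time-gap rule
        return np.clip(0.5 * rel_speed - 0.2 * (2.0 * ego_speed - gap), -3.0, 2.0)
    return np.clip(0.5 * (set_speed - ego_speed), -3.0, 2.0)

def nn_lateral_policy(lateral_error, heading_error, ego_speed):
    """Placeholder for a neural-network steering controller: returns a
    steering angle that reduces lateral and heading error w.r.t. the
    reference path produced by the planner."""
    return np.clip(-0.3 * lateral_error - 0.8 * heading_error, -0.5, 0.5)

def step_vehicle(state, accel, steer, dt=0.05, wheelbase=2.7):
    """Kinematic bicycle model, used only to close the loop in this sketch."""
    x, y, yaw, v = state
    x += v * np.cos(yaw) * dt
    y += v * np.sin(yaw) * dt
    yaw += v / wheelbase * np.tan(steer) * dt
    v = max(0.0, v + accel * dt)
    return np.array([x, y, yaw, v])

# Follow a straight reference path (y = 0), standing in for a hybrid A* output.
state = np.array([0.0, 1.5, 0.0, 10.0])  # x [m], y [m], yaw [rad], speed [m/s]
for _ in range(200):
    lateral_error = state[1] - 0.0
    heading_error = state[2] - 0.0
    accel = ddpg_longitudinal_policy(gap=None, rel_speed=0.0,
                                     ego_speed=state[3], set_speed=15.0)
    steer = nn_lateral_policy(lateral_error, heading_error, state[3])
    state = step_vehicle(state, accel, steer)

print("final lateral error: %.3f m, final speed: %.2f m/s" % (state[1], state[3]))
```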
dc.identifier.uri: https://hdl.handle.net/1805/37697
dc.language.iso: en_US
dc.subject: Reinforcement Learning
dc.subject: Machine Learning
dc.subject: Path planning
dc.subject: Motion Planning
dc.subject: Control Systems
dc.subject: Automatic Control
dc.subject: A star
dc.subject: Hybrid A star
dc.subject: Self driving cars
dc.title: Integrating Data-driven Control Methods with Motion Planning: A Deep Reinforcement Learning-based Approach
dc.type: Thesis
Files
Original bundle
Name: AP_Dissertation.pdf
Size: 11.49 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.99 KB
Format: Item-specific license agreed upon to submission