Selectively decentralized reinforcement learning

dc.contributor.advisor: Mukhopadhyay, Snehasis
dc.contributor.author: Nguyen, Thanh Minh
dc.date.accessioned: 2018-08-10T20:19:50Z
dc.date.available: 2018-08-10T20:19:50Z
dc.date.issued: 2018-05
dc.degree.date: 2018
dc.degree.grantor: Purdue University
dc.degree.level: Ph.D.
dc.description: Indiana University-Purdue University Indianapolis (IUPUI)
dc.description.abstract: The main contributions of this thesis are the selectively decentralized method for solving multi-agent reinforcement learning problems and the discretized Markov-decision-process (MDP) algorithm for computing sub-optimal learning policies in completely unknown learning and control problems. These contributions address several challenges in multi-agent reinforcement learning: the unknown and dynamic nature of the learning environment, the difficulty of computing a closed-form solution to the learning problem, slow learning in large-scale systems, and the questions of how, when, and with whom the learning agents should communicate. The selectively decentralized method, which evaluates all possible communicative strategies, not only increases learning speed and achieves better learning goals but also learns a communicative policy for each agent. Compared with other state-of-the-art approaches, the contributions of this thesis offer two advantages. First, the selectively decentralized method can incorporate a wide range of well-known single-agent reinforcement learning algorithms, including the discretized MDP, whereas state-of-the-art approaches usually apply to only one class of algorithms. Second, the discretized MDP algorithm can compute a sub-optimal learning policy when the environment is described in a general nonlinear form, whereas other state-of-the-art approaches often assume that the environment has a restricted form, particularly feedback-linearization form. This thesis also discusses several alternative approaches to multi-agent learning, including Multidisciplinary Optimization. In addition, it shows how the selectively decentralized method successfully solves several real-world problems, particularly in mechanical and biological systems.
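The abstract's notion of computing a sub-optimal policy on a discretized MDP can be loosely illustrated with standard value iteration over a finite (discretized) state space. The sketch below is not the thesis's algorithm and includes none of the selective-decentralization machinery; the toy states, actions, dynamics, and rewards are invented for demonstration only.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration on a finite (discretized) MDP.

    P: (A, S, S) array, P[a, s, t] = Pr(next state t | state s, action a).
    R: (S, A) array of immediate rewards.
    Returns the converged value function and the greedy policy.
    """
    S = R.shape[0]
    V = np.zeros(S)
    while True:
        # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy 2-state, 2-action chain (hypothetical, not from the thesis):
# action 0 stays put, action 1 jumps to the other state; state 1 pays reward 1.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # a = 0: stay
              [[0.0, 1.0], [1.0, 0.0]]])  # a = 1: switch
R = np.array([[0.0, 0.0],   # state 0 pays nothing
              [1.0, 1.0]])  # state 1 pays 1 for any action
V, policy = value_iteration(P, R)
# V converges to roughly [9, 10]; the greedy policy moves to state 1 and stays.
```

In a continuous nonlinear control problem, the state space would first be partitioned into cells to obtain such a finite MDP, which is why the resulting policy is sub-optimal: it is optimal only with respect to the discretization.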
dc.identifier.doi: 10.7912/C24D4B
dc.identifier.uri: https://hdl.handle.net/1805/17103
dc.identifier.uri: http://dx.doi.org/10.7912/C2/2359
dc.language.iso: en_US
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject: Selective Decentralization
dc.subject: Reinforcement Learning
dc.subject: Markov Decision Process
dc.subject: Multidisciplinary Optimization
dc.subject: Adaptive Control
dc.title: Selectively decentralized reinforcement learning
dc.type: Thesis
thesis.degree.discipline: Computer & Information Science
Files

Original bundle (showing 1 - 5 of 7):
- Thesis_PurdueFormat - June21.pdf (1.59 MB, Adobe Portable Document Format): Thesis
- paper1.pdf (173.39 KB, Adobe Portable Document Format): Publication 1
- paper2.pdf (350.27 KB, Adobe Portable Document Format): Publication 2
- paper3.pdf (661.35 KB, Adobe Portable Document Format): Publication 3
- paper4.pdf (328.8 KB, Adobe Portable Document Format): Publication 4

License bundle (showing 1 - 1 of 1):
- license.txt (1.99 KB): Item-specific license agreed upon to submission