Browsing by Subject "selective decentralization"
Item
Selective decentralization to improve reinforcement learning in unknown linear noisy systems (IEEE, 2017-11)
Nguyen, Thanh; Mukhopadhyay, Snehasis; Computer and Information Science, School of Science
In this paper, we answer the question of to what extent selective decentralization can enhance learning and control performance when the system is noisy and unknown. Compared with previous work on selective decentralization, we add system noise as a further complexity in the learning and control problem, and therefore restrict our analysis to simple toy examples of noisy linear systems. In a linear system, the Hamilton-Jacobi-Bellman (HJB) equation reduces to the Riccati equation, which has a closed-form solution. Our previous framework for learning and controlling unknown systems is based on the following principle: approximate the system via identification, then apply the model-based solution. Accordingly, this paper explores learning and control performance along two dimensions: system identification error and system stabilization. Our results show that selective decentralization achieves better learning performance than centralization when the noise level is low.

Item
Selectively Decentralized Q-Learning (IEEE, 2017-10)
Nguyen, Thanh; Mukhopadhyay, Snehasis; Computer and Information Science, School of Science
In this paper, we explore the capability of the selectively decentralized Q-learning approach in learning how to optimally stabilize control systems, compared to the centralized approach. We focus on problems in which the systems are completely unknown except for possible domain knowledge that allows us to decentralize them into subsystems. In selective decentralization, we explore all possible communication policies among subsystems and use the cumulative gained Q-value as the metric for deciding which decentralization scheme should be used for control.
The results show that the selectively decentralized approach not only stabilizes the system faster but also converges faster in gained Q-value across systems with different interconnection strengths. In addition, the convergence time of the selectively decentralized approach does not appear to grow exponentially with system dimensionality. Practically, this implies that selectively decentralized Q-learning can serve as an alternative approach for large-scale unknown control systems, where it is difficult in theory to derive a closed-form solution from the Hamilton-Jacobi-Bellman equation.
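Neither abstract includes code, but both papers build on the standard tabular Q-learning update. As a rough illustration only (not the authors' selectively decentralized scheme, and on a hypothetical 4-state chain rather than a real control system), a minimal sketch might look like:

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with the standard update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

def chain_step(s, a):
    """Hypothetical toy environment: a 4-state chain where action 1
    moves right and action 0 stays; reaching state 3 ends the episode
    with reward 1 (a stand-in for 'stabilizing' the system)."""
    s2 = min(s + 1, 3) if a == 1 else s
    done = (s2 == 3)
    return s2, (1.0 if done else 0.0), done

Q = q_learning(4, 2, chain_step)
# greedy policy recovered from the learned Q-table
greedy = [max(range(2), key=lambda i: Q[s][i]) for s in range(3)]
```

In the papers' selective-decentralization setting, one would run such a learner under each candidate communication policy among subsystems and compare the cumulative gained Q-value to pick the scheme used for control; this sketch shows only the underlying single-agent update.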