Browsing by Subject "decentralized control"
Identification and Optimal Control of Large-Scale Systems Using Selective Decentralization (IEEE, 2016-10)
Nguyen, Thanh; Mukhopadhyay, Snehasis; Computer and Information Science, School of Science

In this paper, we explore the capability of selective decentralization to improve the control performance for unknown large-scale systems using model-based approaches. In selective decentralization, we explore all possible communication policies among subsystems and show that, with appropriate switching among the resulting multiple identification models (each with its corresponding communication policy), selective decentralization significantly outperforms a centralized identification model when the system is weakly interconnected, and performs at least as well as the centralized model when the system is strongly interconnected. To derive the suboptimal control, our control design includes two phases. First, we apply system identification to train an approximation model for the unknown system. Second, we find the suboptimal solution of the Hamilton–Jacobi–Bellman (HJB) equation to derive the suboptimal control. In linear systems, the HJB equation reduces to the well-studied Riccati equation, which has a closed-form solution. In nonlinear systems, we discretize the approximation model and obtain the control unit by applying dynamic programming methods to the resulting Markov Decision Process (MDP). We compare the performance of selective decentralization, complete decentralization, and centralization within our two-phase control design. Our results show that selective decentralization outperforms the complete decentralization and centralization approaches when the systems are completely decoupled or strongly interconnected.

Two-phase Selective Decentralization to Improve Reinforcement Learning Systems with MDP (IOS, 2018-06)
Nguyen, Thanh; Mukhopadhyay, Snehasis; Computer and Information Science, School of Science

In this paper, we explore the capability of selective decentralization to improve reinforcement learning performance for unknown systems using model-based approaches. In selective decentralization, we automatically select the best communication policies among agents. Our learning design, built on control system principles, includes two phases. First, we apply system identification to train an approximated model for the unknown systems. Second, we find the suboptimal solution of the Hamilton–Jacobi–Bellman (HJB) equation to derive the suboptimal control. For linear systems, the HJB equation reduces to the well-known Riccati equation, which has a closed-form solution. For nonlinear systems, we discretize the approximation model as a Markov Decision Process (MDP) and determine the control using dynamic programming algorithms. Since the theoretical foundation for using an MDP to control a nonlinear system has not been thoroughly developed, we prove that, under several sufficient conditions, the control law learned by the discrete-MDP approach is guaranteed to stabilize the system, which is the learning goal. These learning and control techniques can be applied in a centralized, completely decentralized, or selectively decentralized manner. Our results show that selective decentralization outperforms the complete decentralization and centralization approaches when the systems are completely decoupled or strongly interconnected.
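
Both abstracts note that, for linear systems, the HJB equation reduces to the Riccati equation. A minimal sketch of that step, assuming an already-identified linear model dx/dt = Ax + Bu with quadratic cost; the A, B, Q, R values below are illustrative placeholders, not taken from either paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative identified linear plant (placeholder values, not from the papers).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state cost weight
R = np.array([[1.0]])  # control cost weight

# For a linear system with quadratic cost, the HJB equation reduces to the
# continuous algebraic Riccati equation: A'P + PA - P B R^{-1} B' P + Q = 0.
P = solve_continuous_are(A, B, Q, R)

# Optimal state feedback u = -Kx with gain K = R^{-1} B' P.
K = np.linalg.solve(R, B.T @ P)
print("feedback gain K:", K)
```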
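For the nonlinear case, both papers discretize the identified model into an MDP and solve it by dynamic programming. The sketch below is a generic tabular value-iteration loop, not the papers' specific discretization; the transition tensor `P` and reward array `R` are hypothetical stand-ins for whatever the identified model produces:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Tabular value iteration over a discretized model.

    P: (S, A, S) transition probabilities from the discretization.
    R: (S, A) expected one-step rewards (e.g., negative quadratic cost).
    Returns the greedy policy (one action index per state) and its values.
    """
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return Q.argmax(axis=1), V
```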
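Finally, selective decentralization as described in the first abstract enumerates communication policies among subsystems and switches to the identification model that currently performs best. A minimal sketch of that switching rule, with a hypothetical `predict(x, u)` model interface that is assumed here, not specified in the papers:

```python
import itertools
import numpy as np

def enumerate_policies(n_subsystems):
    """All communication policies: each pair of subsystems either
    exchanges state information or does not (2^(n choose 2) candidates)."""
    pairs = list(itertools.combinations(range(n_subsystems), 2))
    for mask in itertools.product([False, True], repeat=len(pairs)):
        yield {pair for pair, on in zip(pairs, mask) if on}

def select_model(models, x, u, x_next):
    """Switch to the identification model with the smallest one-step
    prediction error (hypothetical interface: model.predict(x, u))."""
    errors = [np.linalg.norm(m.predict(x, u) - x_next) for m in models]
    return models[int(np.argmin(errors))]
```

The design choice this illustrates: rather than committing to full centralization or full decentralization, one identification model is trained per communication policy, and the controller is derived from whichever model is tracking the plant most accurately.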