ScholarWorksIndianapolis

Browsing by Author "Tilak, Omkar"

Now showing 1 - 1 of 1
    Reinforcement Learning Algorithms for Uncertain, Dynamic, Zero-Sum Games
    (IEEE, 2018-12) Mukhopadhyay, Snehasis; Tilak, Omkar; Chakrabarti, Subir; Computer and Information Science, School of Science
Dynamic zero-sum games are a model of multiagent decision-making that is well studied in the mathematical game theory literature. In this paper, we derive a sufficient condition for the existence of a solution to this problem, and then discuss various reinforcement learning strategies for solving such a dynamic game under uncertainty, where the game matrices at the various states, as well as the transition probabilities between states under different agent actions, are unknown. A novel algorithm based on heterogeneous games of learning automata (HEGLA), along with algorithms based on model-based and model-free reinforcement learning, is presented as a possible approach to learning the Markov equilibrium policies that solve the game, assuming they satisfy the sufficient condition for existence. In the HEGLA algorithm, each automaton simultaneously plays zero-sum games with some automata and identical-payoff games with others. Simulation studies are reported to complement the theoretical and algorithmic discussion.
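The basic building block of the setting the abstract describes is a two-player zero-sum matrix game at each state, whose equilibrium is generally a mixed strategy. The sketch below is not the paper's HEGLA or RL algorithms; it uses classical fictitious play (repeated best response to the opponent's empirical play, which converges in zero-sum games) to approximate the equilibrium of a single known game matrix. The function name and the matching-pennies example matrix are illustrative assumptions.

```python
def fictitious_play(A, iters=20000):
    """Approximate equilibrium mixed strategies of a zero-sum matrix game.

    A[i][j] is the payoff to the row player (maximizer) when the row
    player picks action i and the column player (minimizer) picks j.
    Illustrative sketch, not the paper's algorithm.
    """
    m, n = len(A), len(A[0])
    row_counts = [0] * m          # how often each row action was played
    col_counts = [0] * n          # how often each column action was played
    # Cumulative payoff each pure action would have earned against the
    # opponent's actions so far; best reply = argmax / argmin of these.
    row_scores = [0.0] * m
    col_scores = [0.0] * n
    for _ in range(iters):
        i = max(range(m), key=lambda a: row_scores[a])  # row best reply
        j = min(range(n), key=lambda b: col_scores[b])  # column best reply
        row_counts[i] += 1
        col_counts[j] += 1
        for a in range(m):
            row_scores[a] += A[a][j]
        for b in range(n):
            col_scores[b] += A[i][b]
    # Empirical action frequencies approximate the equilibrium mixture.
    return ([c / iters for c in row_counts],
            [c / iters for c in col_counts])

# Matching pennies: unique equilibrium mixes both actions with prob. 1/2.
A = [[1, -1], [-1, 1]]
row_mix, col_mix = fictitious_play(A)
```

In the dynamic game of the paper, one such matrix game sits at every state, and its entries and the state-transition probabilities are unknown, which is what motivates the learning-automata and reinforcement learning approaches.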
About IU Indianapolis ScholarWorks
  • Copyright © 2025 The Trustees of Indiana University