Institutional Repository
Technical University of Crete

Learning in zero–sum team Markov games using factored value functions

Lagoudakis, Michael; Parr, R.

URI: http://purl.tuc.gr/dl/dias/FBB6EA9E-B181-4D39-8F6C-4DDB3B0278DA
Identifier: http://machinelearning.wustl.edu/mlpapers/paper_files/CN15.pdf
Language: en
Extent: 8 pages
Title: Learning in zero–sum team Markov games using factored value functions
Creator: Lagoudakis, Michael
Creator (el): Λαγουδακης Μιχαηλ
Creator: Parr, R.
Content Summary: We present a new method for learning good strategies in zero-sum Markov games in which each side is composed of multiple agents collaborating against an opposing team of agents. Our method requires full observability and communication during learning, but the learned policies can be executed in a distributed manner. The value function is represented as a factored linear architecture, and its structure determines the necessary computational resources and communication bandwidth. This approach permits a tradeoff between simple representations with little or no communication between agents and complex, computationally intensive representations with extensive coordination between agents. Thus, we provide a principled means of using approximation to combat the exponential blowup in the joint action space of the participants. The approach is demonstrated with an example that shows the efficiency gains over naive enumeration. (A minimal illustrative sketch of the factored-maximization idea follows this record.)
Type of Item: Conference Full Paper
License: http://creativecommons.org/licenses/by/4.0/
Date of Item: 2015-11-13
Date of Publication: 2002
Subject: HMMs (Hidden Markov models)
Bibliographic Citation: M. G. Lagoudakis and R. Parr. (2002, Dec.). Learning in zero–sum team Markov games using factored value functions. [Online]. Available: http://machinelearning.wustl.edu/mlpapers/paper_files/CN15.pdf
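
The abstract above describes representing the team's value function as a factored linear architecture so that maximizing over the joint action space does not require enumerating every action combination. The sketch below is a hypothetical illustration of that general idea, not the paper's actual algorithm: a toy Q-function over four agents is written as a sum of pairwise terms along a chain, and the best joint action is recovered by variable elimination instead of brute-force enumeration. All names, sizes, and the chain structure are assumptions made for illustration.

import itertools
import numpy as np

# Hypothetical example: 4 agents, 3 actions each. The joint Q-value is a sum of
# pairwise local terms over a chain of agents (0-1, 1-2, 2-3), so no single term
# touches the full joint action space.
n_agents, n_actions = 4, 3
rng = np.random.default_rng(0)
pairs = [(i, i + 1) for i in range(n_agents - 1)]
local_q = {pair: rng.normal(size=(n_actions, n_actions)) for pair in pairs}

def joint_q(joint_action):
    """Factored value: sum of the pairwise local terms."""
    return sum(local_q[(i, j)][joint_action[i], joint_action[j]] for i, j in pairs)

# Naive maximization enumerates all n_actions ** n_agents joint actions.
naive_best = max(itertools.product(range(n_actions), repeat=n_agents), key=joint_q)

def chain_argmax():
    """Variable elimination along the chain: O(n_agents * n_actions^2) work."""
    msg = np.zeros(n_actions)   # best value of the eliminated suffix, per action of the current agent
    back = []                   # argmax tables for recovering the joint action
    for i, j in reversed(pairs):
        scores = local_q[(i, j)] + msg          # rows: agent i's action, cols: agent j's action
        back.append(scores.argmax(axis=1))      # best action of j for each action of i
        msg = scores.max(axis=1)
    back.reverse()
    action = [int(msg.argmax())]                # best action for agent 0
    for choices in back:                        # walk the chain forward to recover the rest
        action.append(int(choices[action[-1]]))
    return tuple(action)

factored_best = chain_argmax()
assert np.isclose(joint_q(naive_best), joint_q(factored_best))
print("best joint action:", factored_best, "value:", round(joint_q(factored_best), 3))

Denser factorizations (local terms over larger agent subsets) would raise both the cost of this maximization step and the coordination required between agents, mirroring the representation/communication tradeoff described in the abstract.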
