Institutional Repository
Technical University of Crete

Decentralized Bayesian reinforcement learning for online agent collaboration

Parr G., Farinelli A., Rogers A., Chalkiadakis Georgios, Jennings N. R., McClean S., Teacy W. T. L.



URI: http://purl.tuc.gr/dl/dias/E504E59F-683F-49F8-9E3F-41D26D6FC513
Year: 2012
Type: Conference Full Paper
License:
Bibliographic Citation: W. T. L. Teacy, G. Chalkiadakis, A. Farinelli, A. Rogers, N. R. Jennings, S. McClean and G. Parr, "Decentralized Bayesian reinforcement learning for online agent collaboration," presented at the 11th International Conference on Autonomous Agents and Multiagent Systems, Valencia, Spain, 2012.
Appears in Collections

Abstract

Solving complex but structured problems in a decentralized manner via multiagent collaboration has received much attention in recent years. This is natural, as on one hand, multiagent systems usually possess a structure that determines the allowable interactions among the agents; and on the other hand, the single most pressing need in a cooperative multiagent system is to coordinate the local policies of autonomous agents with restricted capabilities to serve a system-wide goal. The presence of uncertainty makes this even more challenging, as the agents face the additional need to learn the unknown environment parameters while forming (and following) local policies in an online fashion. In this paper, we provide the first Bayesian reinforcement learning (BRL) approach for distributed coordination and learning in a cooperative multiagent system by devising two solutions to this type of problem. More specifically, we show how the Value of Perfect Information (VPI) can be used to perform efficient decentralised exploration in both model-based and model-free BRL, and in the latter case, provide a closed form solution for VPI, correcting a decade old result by Dearden, Friedman and Russell. To evaluate these solutions, we present experimental results comparing their relative merits, and demonstrate empirically that both solutions outperform an existing multiagent learning method, representative of the state-of-the-art.
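To make the exploration mechanism concrete, the sketch below estimates VPI by Monte Carlo from posterior samples of action values, following the standard definition of the expected gain from learning an action's true value (Dearden, Friedman and Russell). It is an illustrative approximation only, not the closed-form expression derived in the paper and not its decentralised treatment; the function name, sample counts, and example numbers are hypothetical.

import numpy as np

def vpi_monte_carlo(q_samples):
    # q_samples: shape (n_samples, n_actions); each row is one joint draw
    # of the action values Q(s, a) from the agent's current posterior.
    means = q_samples.mean(axis=0)
    best = int(np.argmax(means))                     # believed-best action a1
    second_best_mean = np.partition(means, -2)[-2]   # E[Q] of the runner-up a2

    vpi = np.zeros(q_samples.shape[1])
    for a in range(q_samples.shape[1]):
        q = q_samples[:, a]
        if a == best:
            # Gain only if a1's true value turns out below E[Q(s, a2)].
            gain = np.maximum(second_best_mean - q, 0.0)
        else:
            # Gain only if a's true value turns out above E[Q(s, a1)].
            gain = np.maximum(q - means[best], 0.0)
        vpi[a] = gain.mean()
    return vpi

# Hypothetical 3-action example with normal posteriors over Q-values.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[1.0, 0.9, 0.2], scale=[0.1, 0.5, 0.3], size=(10000, 3))
scores = samples.mean(axis=0) + vpi_monte_carlo(samples)
print("chosen action:", int(np.argmax(scores)))      # act greedily on E[Q] + VPI

Acting greedily on E[Q(s, a)] + VPI(s, a) is the exploration rule this style of Bayesian RL relies on; the paper's contribution is computing VPI in closed form and applying it to decentralised, cooperative settings rather than via the sampling approximation shown here.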
