Institutional Repository
Technical University of Crete


Efficient reinforcement learning in adversarial games

Lagoudakis Michael, Skoulakis Ioannis

Full Record


URI: http://purl.tuc.gr/dl/dias/3FB923DA-4C00-4B8D-B671-06DDC7E38ACF
Year: 2012
Type: Conference Full Paper
Bibliographic Citation: I. Skoulakis and M. G. Lagoudakis, "Efficient Reinforcement Learning in Adversarial Games," in 2012 IEEE International Conference on Tools with Artificial Intelligence (ICTAI), pp. 704-711. doi: 10.1109/ICTAI.2012.100

Abstract

The ability to learn is critical for agents designed to compete in a variety of two-player, turn-taking, tactical adversarial games, such as Backgammon, Othello/Reversi, Chess, Hex, etc. The mainstream approach to learning in such games consists of updating some state evaluation function, usually in a Temporal Difference (TD) sense, either under the MiniMax optimality criterion or under optimization against a specific opponent. However, this approach is limited by several factors: (a) updates to the evaluation function are incremental, (b) stored samples from past games cannot be utilized, and (c) the quality of each update depends on the current evaluation function due to bootstrapping. In this paper, we present a learning approach based on the Least-Squares Policy Iteration (LSPI) algorithm that overcomes these limitations by focusing on learning a state-action evaluation function. The key advantage of the proposed approach is that the agent can make batch updates to the evaluation function with any collection of samples, can utilize samples from past games, and can make updates that do not depend on the current evaluation function, since there is no bootstrapping. We demonstrate the efficiency of the LSPI agent over the TD agent in the classical board game of Othello/Reversi.
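The batch update the abstract refers to is, in the standard LSPI formulation (Lagoudakis and Parr, 2003), a closed-form LSTD-Q solve over a fixed set of (state, action, reward, next state) samples under a linear architecture Q(s,a) ≈ φ(s,a)ᵀw, rather than an incremental, bootstrapped update. The sketch below illustrates only this generic formulation; the feature map `phi`, the legal-move generator `actions`, and all parameter names are illustrative assumptions, and the paper's specific adversarial adaptation (how the opponent's moves and the MiniMax criterion are handled in Othello/Reversi) is not reproduced here.

```python
import numpy as np

def lstdq(samples, phi, policy, gamma=0.99, k=None, reg=1e-6):
    """One LSTD-Q solve: fit weights w so that Q(s, a) ~= phi(s, a) @ w
    from a fixed batch of (s, a, r, s_next, done) samples.
    The whole batch is solved in closed form, so the result does not
    bootstrap on the values implied by the current weights."""
    k = k if k is not None else len(phi(*samples[0][:2]))
    A = reg * np.eye(k)          # small ridge term keeps A invertible
    b = np.zeros(k)
    for s, a, r, s_next, done in samples:
        f = phi(s, a)
        f_next = np.zeros(k) if done else phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)

def lspi(samples, phi, actions, gamma=0.99, iters=20, tol=1e-4):
    """Least-Squares Policy Iteration: alternate LSTD-Q evaluation with
    greedy policy improvement until the weights stop changing."""
    k = len(phi(*samples[0][:2]))
    w = np.zeros(k)
    for _ in range(iters):
        # Greedy policy with respect to the current weight vector.
        policy = lambda s, w=w: max(actions(s), key=lambda a: phi(s, a) @ w)
        w_new = lstdq(samples, phi, policy, gamma, k)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```

Because A and b are plain sums over samples, transitions stored from earlier games can simply be added to the same batch and re-solved, which is what enables the reuse of past experience highlighted in the abstract.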
