URI | http://purl.tuc.gr/dl/dias/B72A2B56-6E1A-47B6-8C6F-9B89CA2AF27E | - |
Identifier | https://doi.org/10.1142/S0218213014600161 | - |
Language | en | - |
Extent | 21 pages | en |
Title | Directed policy search for decision making using relevance vector machines | en |
Creator | Rexakis Ioannis | en |
Creator | Ρεξακης Ιωαννης | el |
Creator | Lagoudakis Michael | en |
Creator | Λαγουδακης Μιχαηλ | el |
Publisher | World Scientific Publishing | en |
Description | Publication in a scientific journal | en |
Description | Δημοσίευση σε επιστημονικό περιοδικό | el |
Content Summary | Several recent learning approaches in decision making under uncertainty suggest the use of classifiers for representing policies compactly. The space of possible policies, even under such structured representations, is huge and must be searched carefully to avoid computationally expensive policy simulations (rollouts). In our recent work, we proposed a method for directed exploration of policy space using support vector classifiers, whereby rollouts are directed to states around the boundaries between different action choices indicated by the separating hyperplanes in the represented policies. While effective, this method suffers from the growing number of support vectors in the underlying classifiers as the number of training examples increases. In this paper, we propose an alternative method for directed policy search based on relevance vector machines. Relevance vector machines are used both for classification (to represent a policy) and regression (to approximate the corresponding relative action advantage function). Classification is enhanced by anomaly detection for accurate policy representation. Exploiting the internal structure of the regressor, we guide the probing of the state space only to critical areas corresponding to changes of action dominance in the underlying policy. This directed focus on critical parts of the state space iteratively leads to refinement and improvement of the underlying policy and delivers excellent control policies in only a few iterations, while the small number of relevance vectors yields significant computational time savings. We demonstrate the proposed approach and compare it with our previous method on standard reinforcement learning domains (inverted pendulum and mountain car). | en |
Type of Item | Peer-Reviewed Journal Publication | en |
Type of Item | Δημοσίευση σε Περιοδικό με Κριτές | el |
License | http://creativecommons.org/licenses/by/4.0/ | en |
Date of Item | 2015-10-27 | - |
Date of Publication | 2014 | - |
Subject | Reinforcement learning | en |
Subject | decision making under uncertainty | en |
Subject | classification | en |
Bibliographic Citation | I. Rexakis and M. Lagoudakis, "Directed policy search for decision making using relevance vector machines," International Journal on Artificial Intelligence Tools, vol. 23, no. 4, Aug. 2014. doi: 10.1142/S0218213014600161 | en |
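
To illustrate the kind of directed exploration summarized above, here is a minimal sketch, not the authors' implementation: rollouts are spent only on candidate states that lie close to the current policy classifier's decision boundary, where action dominance changes. Since a relevance vector machine is not bundled with scikit-learn, a kernel SVC stands in for the policy classifier, and `simulate_rollout`, the 2-D state bounds, and all constants are hypothetical placeholders.

```python
# Sketch of boundary-directed rollouts (assumptions noted in the lead-in).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulate_rollout(state):
    """Placeholder rollout: return the empirically better action (0 or 1)."""
    return int(state[0] + state[1] > 0.0)  # toy ground-truth policy

# 1. Train an initial policy classifier on uniformly sampled, rolled-out states.
states = rng.uniform(-1.0, 1.0, size=(50, 2))
actions = np.array([simulate_rollout(s) for s in states])
policy = SVC(kernel="rbf", gamma=1.0).fit(states, actions)

# 2. Directed exploration: among many candidates, keep the states with the
#    smallest |decision_function| value, i.e. those nearest the boundary
#    between action choices, and spend new rollouts only there.
candidates = rng.uniform(-1.0, 1.0, size=(1000, 2))
margins = np.abs(policy.decision_function(candidates))
critical = candidates[np.argsort(margins)[:50]]  # 50 most "critical" states

# 3. Refine: label the critical states by rollout and retrain the classifier.
new_actions = np.array([simulate_rollout(s) for s in critical])
states = np.vstack([states, critical])
actions = np.concatenate([actions, new_actions])
policy = SVC(kernel="rbf", gamma=1.0).fit(states, actions)
print("policy retrained on", len(states), "labelled states")
```

Iterating steps 2 and 3 concentrates the expensive rollouts on the regions where the represented policy is uncertain, which is the effect the summary attributes to probing near changes of action dominance; the paper's use of relevance vector machines additionally keeps the number of kernel basis functions small, which this stand-in classifier does not capture.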