Autonomous Motion

Using reward-weighted regression for reinforcement learning of task space control

2007

Conference Paper


In this paper, we evaluate different versions of the three main kinds of model-free policy gradient methods, i.e., finite-difference gradients, 'vanilla' policy gradients, and natural policy gradients. Each of these methods is first presented in its simple form and subsequently refined and optimized. By carrying out numerous experiments on the cart-pole regulator benchmark, we aim to provide a useful baseline for future research on parameterized policy search algorithms. Portable C++ code is provided for both the plant and the algorithms; thus, the results in this paper can be reevaluated and reused, and new algorithms can be inserted with ease.
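
As a rough illustration of the simplest of the three method families named above, the following is a minimal C++ sketch of a finite-difference policy gradient estimator. The rollout() function here is a hypothetical stand-in (a smooth quadratic return surface), not the cart-pole plant, and the parameter names (eps, alpha) are illustrative; the paper's own portable C++ implementation is available at the URL listed below.

// Minimal sketch of a finite-difference policy gradient estimator,
// the simplest of the three method families compared in the paper.
// rollout() is a hypothetical quadratic toy problem, NOT the
// cart-pole plant from the paper.
#include <cstdio>
#include <vector>

// Hypothetical stand-in for the expected return J(theta). A real
// evaluation would run an episode on the cart-pole regulator and
// return the accumulated reward.
double rollout(const std::vector<double>& theta) {
    double j = 0.0;
    for (double t : theta) j -= (t - 1.0) * (t - 1.0);  // maximum at theta_i = 1
    return j;
}

// Forward-difference estimate g_i = (J(theta + eps*e_i) - J(theta)) / eps.
std::vector<double> fdGradient(const std::vector<double>& theta, double eps) {
    const double jRef = rollout(theta);
    std::vector<double> g(theta.size());
    for (std::size_t i = 0; i < theta.size(); ++i) {
        std::vector<double> perturbed = theta;
        perturbed[i] += eps;                 // perturb one parameter at a time
        g[i] = (rollout(perturbed) - jRef) / eps;
    }
    return g;
}

int main() {
    std::vector<double> theta = {0.0, 0.0};  // initial policy parameters
    const double eps = 1e-4;                 // finite-difference perturbation
    const double alpha = 0.1;                // learning rate
    for (int iter = 0; iter < 100; ++iter) { // plain gradient ascent on J
        const std::vector<double> g = fdGradient(theta, eps);
        for (std::size_t i = 0; i < theta.size(); ++i) theta[i] += alpha * g[i];
    }
    std::printf("theta = (%.3f, %.3f), J = %.6f\n", theta[0], theta[1], rollout(theta));
    return 0;
}

On a stochastic benchmark such as the cart pole, single forward differences are noisy, so finite-difference methods typically average over many perturbed rollouts and recover the gradient by least-squares regression rather than the one-sample estimate sketched here.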

Author(s): Peters, J. and Schaal, S.
Book Title: Proceedings of the 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning
Pages: 262-267
Year: 2007

Department(s): Autonomous Motion, Empirical Inference
Bibtex Type: Conference Paper (inproceedings)

DOI: 10.1109/ADPRL.2007.368197

Address: Honolulu, Hawaii, April 1-5, 2007
Cross Ref: p2672
Note: clmc
URL: http://www-clmc.usc.edu/publications/P/peters-ADPRL2007.pdf

BibTeX

@inproceedings{Peters_PIISADPRL_2007,
  title = {Using reward-weighted regression for reinforcement learning of task space control},
  author = {Peters, J. and Schaal, S.},
  booktitle = {Proceedings of the 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning},
  pages = {262--267},
  address = {Honolulu, Hawaii, April 1-5, 2007},
  year = {2007},
  note = {clmc},
  crossref = {p2672},
  doi = {10.1109/ADPRL.2007.368197},
  url = {http://www-clmc.usc.edu/publications/P/peters-ADPRL2007.pdf}
}