
Learning Objective Functions for Manipulation




We present an approach to learning objective functions for robotic manipulation based on inverse reinforcement learning. Our path integral inverse reinforcement learning algorithm can deal with high-dimensional continuous state-action spaces, and only requires local optimality of demonstrated trajectories. We use L1 regularization in order to achieve feature selection, and propose an efficient algorithm to minimize the resulting convex objective function. We demonstrate our approach by applying it to two core problems in robotic manipulation. First, we learn a cost function for redundancy resolution in inverse kinematics. Second, we use our method to learn a cost function over trajectories, which is then used in optimization-based motion planning for grasping and manipulation tasks. Experimental results show that our method outperforms previous algorithms in high-dimensional settings.
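The abstract mentions minimizing an L1-regularized convex objective to obtain sparse feature weights. A minimal sketch of that general idea is proximal gradient descent (ISTA) on a synthetic quadratic loss; the feature matrix `Phi`, target `c`, and step-size choice below are illustrative assumptions, not the paper's actual algorithm or features.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the L1 norm: shrinks each coordinate toward zero,
    # producing exact zeros for small coordinates (feature selection).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(grad_f, w0, lam, step, n_iters=500):
    # Proximal gradient (ISTA) for min_w f(w) + lam * ||w||_1,
    # where f is smooth and convex and grad_f computes its gradient.
    w = w0.copy()
    for _ in range(n_iters):
        w = soft_threshold(w - step * grad_f(w), step * lam)
    return w

# Toy stand-in for the learning problem: a quadratic surrogate loss
# f(w) = 0.5 * ||Phi w - c||^2 over hypothetical trajectory features Phi.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]            # only a few features matter
c = Phi @ w_true

grad_f = lambda w: Phi.T @ (Phi @ w - c)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1/L, L = Lipschitz const. of grad_f
w_hat = ista(grad_f, np.zeros(20), lam=0.1, step=step)
print("nonzero weights:", np.count_nonzero(np.abs(w_hat) > 1e-3))
```

Because the L1 proximal step zeroes out small coordinates exactly, the recovered weight vector is sparse, which is the feature-selection effect the abstract refers to.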

Author(s): Kalakrishnan, M. and Pastor, P. and Righetti, L. and Schaal, S.
Book Title: IEEE International Conference on Robotics and Automation
Year: 2013

Department(s): Autonomous Motion, Movement Generation and Control
Research Project(s): Autonomous Robotic Manipulation, Inverse Optimal Control
Bibtex Type: Conference Paper (inproceedings)

Cross Ref: p10529
Note: clmc


BibTeX

@inproceedings{kalakrishnanICRA2013,
  title = {Learning Objective Functions for Manipulation},
  author = {Kalakrishnan, M. and Pastor, P. and Righetti, L. and Schaal, S.},
  booktitle = {IEEE International Conference on Robotics and Automation},
  year = {2013},
  note = {clmc},
  crossref = {p10529}
}