Grasping familiar objects using shape context

2009

Conference Paper



We present work on vision-based robotic grasping. The proposed method extracts and represents the global contour of an object in a monocular image. A suitable grasp is then generated using a learning framework in which prototypical grasping points are learned from several examples and then applied to novel objects. For representation we use the shape context descriptor, and for learning we use a supervised approach in which the classifier is trained on labeled synthetic images. Our results show that combining a shape-context-based descriptor with a non-linear classification algorithm leads to stable detection of grasping points for a variety of objects. Furthermore, we show how our representation supports the inference of a full grasp configuration.
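To make the representation concrete, the following is a minimal sketch of a shape context descriptor as introduced by Belongie et al., which the abstract names as the contour representation. All parameter values (bin counts, radial range) are illustrative assumptions, not the paper's settings; the paper's actual pipeline (contour extraction, classifier) is not reproduced here.

```python
import numpy as np

def shape_context(points, n_radial=5, n_angular=12, r_min=0.125, r_max=2.0):
    """Compute a log-polar histogram of relative point positions
    (a shape context) for each point sampled on an object contour.

    points: (n, 2) array of contour points.
    Returns an (n, n_radial * n_angular) array of descriptors.
    Bin counts and radial range are illustrative defaults.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Pairwise difference vectors and distances between contour points.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    # Normalize by the mean pairwise distance for scale invariance.
    dist = dist / dist[dist > 0].mean()
    angle = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)
    # Log-spaced radial bin edges, uniform angular bins.
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_radial + 1)
    r_bin = np.searchsorted(r_edges, dist) - 1  # -1 means closer than r_min
    a_bin = (angle / (2 * np.pi) * n_angular).astype(int) % n_angular
    descriptors = np.zeros((n, n_radial * n_angular))
    for i in range(n):
        for j in range(n):
            # Skip the point itself and neighbors outside the radial range.
            if i == j or r_bin[i, j] < 0 or r_bin[i, j] >= n_radial:
                continue
            descriptors[i, r_bin[i, j] * n_angular + a_bin[i, j]] += 1
    return descriptors
```

In a grasping-point pipeline along the lines the abstract sketches, each contour point's descriptor would then be fed to a trained classifier that labels it as a grasping point or not.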

Author(s): Bohg, J. and Kragic, D.
Book Title: 2009 International Conference on Advanced Robotics (ICAR 2009)
Pages: 1-6
Year: 2009

Department(s): Autonomous Motion
Bibtex Type: Conference Paper (inproceedings)
Attachments: pdf, slides

BibTex

@inproceedings{5174710,
  title = {Grasping familiar objects using shape context},
  author = {Bohg, J. and Kragic, D.},
  booktitle = {2009 International Conference on Advanced Robotics (ICAR 2009)},
  pages = {1--6},
  year = {2009}
}