Learning Grasping Points with Shape Context




This paper presents work on vision-based robotic grasping. The proposed method adopts a learning framework in which prototypical grasping points are learnt from several examples and then applied to novel objects. For representation we apply the concept of shape context, and for learning we use a supervised approach in which the classifier is trained on labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that combining a descriptor based on shape context with a non-linear classification algorithm leads to stable detection of grasping points for a variety of objects.
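The shape context descriptor mentioned in the abstract captures, for a reference point on an object contour, the distribution of the remaining contour points in a log-polar histogram. The following is a minimal sketch of that idea in Python/NumPy; the bin counts, radial limits, and normalisation choices here are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def shape_context(points, ref, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    """Log-polar histogram of the positions of `points` relative to `ref`.

    A hypothetical sketch of a shape context descriptor: bin counts and
    radial limits are illustrative defaults, not the paper's parameters.
    """
    d = points - ref
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0])           # angles in [-pi, pi)
    mask = r > 0                                   # drop the reference point itself
    r, theta = r[mask], theta[mask]
    r = r / r.mean()                               # scale invariance via mean distance
    # Logarithmically spaced radial bins, uniform angular bins.
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)             # accumulate point counts per bin
    return hist / hist.sum()                       # normalised descriptor
```

In a learning pipeline like the one described, such descriptors computed at candidate contour points would form the feature vectors fed to the linear or non-linear classifier.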

Author(s): Bohg, Jeannette and Kragic, Danica
Journal: Robotics and Autonomous Systems
Volume: 58
Number (issue): 4
Pages: 362--377
Year: 2010
Month: April
Publisher: North-Holland Publishing Co.

Department(s): Autonomous Motion
Bibtex Type: Article (article)
Paper Type: Journal

Address: Amsterdam, The Netherlands
DOI: 10.1016/j.robot.2009.10.003
URL: http://dx.doi.org/10.1016/j.robot.2009.10.003
Attachments: pdf


@article{bohg2010learning,
  title = {Learning Grasping Points with Shape Context},
  author = {Bohg, Jeannette and Kragic, Danica},
  journal = {Robotics and Autonomous Systems},
  volume = {58},
  number = {4},
  pages = {362--377},
  publisher = {North-Holland Publishing Co.},
  address = {Amsterdam, The Netherlands},
  month = apr,
  year = {2010},
  url = {http://dx.doi.org/10.1016/j.robot.2009.10.003},
  month_numeric = {4}
}