Jeannette Bohg (Project leader)
Research Group Leader
Stefan Schaal
Director

2016


Robot Arm Pose Estimation by Pixel-wise Regression of Joint Angles

Widmaier, F., Kappler, D., Schaal, S., Bohg, J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE, May 2016 (inproceedings)

Abstract
To achieve accurate vision-based control with a robotic arm, good hand-eye coordination is required. However, knowing the current configuration of the arm can be very difficult due to noisy readings from joint encoders or an inaccurate hand-eye calibration. We propose an approach to robot arm pose estimation that uses depth images of the arm as input to directly estimate angular joint positions. This is a frame-by-frame method that relies neither on a good initialisation from previous frames nor on knowledge from the joint encoders. For estimation, we employ a random regression forest that is trained on synthetically generated data. We compare different training objectives for the forest and also analyse the influence of prior arm segmentation on accuracy. We show that this approach improves on previous work in terms of both computational complexity and accuracy. Despite the forest being trained on synthetic data only, we demonstrate that the estimation also works on real depth images.

pdf DOI Project Page [BibTex]
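
To make the per-pixel idea above concrete, here is a minimal Python sketch of joint-angle regression with a random forest, in the spirit of the abstract: every depth pixel predicts the full joint configuration and the predictions are pooled per frame. The depth-difference style features, forest parameters, and the random data are placeholder assumptions and do not reproduce the paper's actual setup.

# Hypothetical sketch: per-pixel joint-angle regression with a random forest.
# Features and data are synthetic placeholders, not the paper's features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
N_PIXELS, N_FEATURES, N_JOINTS = 5000, 32, 7

# Stand-in for per-pixel depth-difference features and the ground-truth joint
# angles of the (rendered) arm configuration each pixel was sampled from.
X = rng.normal(size=(N_PIXELS, N_FEATURES))
y = rng.uniform(-np.pi, np.pi, size=(N_PIXELS, N_JOINTS))

# One multi-output regression forest maps each depth pixel to all joint angles.
forest = RandomForestRegressor(n_estimators=20, max_depth=12, n_jobs=-1)
forest.fit(X, y)

# At test time, every pixel of a new frame predicts the full joint configuration;
# pooling the per-pixel predictions (here: the median) gives the frame estimate.
X_frame = rng.normal(size=(1000, N_FEATURES))
per_pixel_angles = forest.predict(X_frame)           # shape (1000, N_JOINTS)
joint_estimate = np.median(per_pixel_angles, axis=0)
print(joint_estimate)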


2015


Robot Arm Tracking with Random Decision Forests

Widmaier, F.

Eberhard-Karls-Universität Tübingen, May 2015 (mastersthesis)

Abstract
For grasping and manipulation with robot arms, knowing the current pose of the arm is crucial for successfully controlling its motion. Pose estimates can often be acquired from encoders inside the arm, but these can be significantly inaccurate, which makes additional techniques necessary. In this master thesis, a novel approach to robot arm pose estimation is presented that works on single depth images without the need for prior foreground segmentation or other preprocessing steps. A random regression forest is used, which is trained only on synthetically generated data. The approach improves on former work by Bohg et al. by considerably reducing the computational effort at both training and test time. The forest in the new method directly estimates the desired joint angles, whereas in the former approach the forest casts 3D position votes for the joints, which then have to be clustered and fed into an iterative inverse kinematics process to finally obtain the joint angles. To improve the estimation accuracy, the standard training objective is replaced by a specialized function that makes use of a model-dependent distance metric called DISP. Experimental results show that the specialized objective indeed improves pose estimation, and that the method, despite being trained on synthetic data only, provides reasonable estimates on real data at test time.

PDF Project Page [BibTex]
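
The DISP-based objective mentioned in the abstract can be illustrated with a toy sketch: instead of scoring a candidate split by the spread of joint angles, score it by how far points on the arm actually move between the configurations that fall on each side. The planar two-link arm, its link lengths, and the split score below are illustrative assumptions, not the kinematic model or the exact objective used in the thesis.

# Illustrative sketch of a DISP-style split objective on a toy 2-link planar arm.
import numpy as np

LINK_LENGTHS = np.array([0.4, 0.3])  # hypothetical link lengths in metres

def body_points(q):
    """Forward kinematics of a planar 2-link arm: returns elbow and end-effector."""
    elbow = LINK_LENGTHS[0] * np.array([np.cos(q[0]), np.sin(q[0])])
    tip = elbow + LINK_LENGTHS[1] * np.array([np.cos(q[0] + q[1]), np.sin(q[0] + q[1])])
    return np.stack([elbow, tip])

def disp(q_a, q_b):
    """DISP-style distance: largest displacement of any tracked body point."""
    return np.max(np.linalg.norm(body_points(q_a) - body_points(q_b), axis=1))

def split_score(configs_left, configs_right):
    """Lower is better: mean pairwise DISP within each side of a candidate split."""
    def spread(configs):
        if len(configs) < 2:
            return 0.0
        d = [disp(a, b) for i, a in enumerate(configs) for b in configs[i + 1:]]
        return float(np.mean(d))
    n_l, n_r = len(configs_left), len(configs_right)
    return (n_l * spread(configs_left) + n_r * spread(configs_right)) / (n_l + n_r)

# Two candidate partitions of the same training configurations: the one that keeps
# kinematically similar poses together receives the lower (better) score.
configs = [np.array([0.0, 0.1]), np.array([0.05, 0.15]),
           np.array([2.0, -1.0]), np.array([2.1, -0.9])]
print(split_score(configs[:2], configs[2:]))     # similar poses grouped -> small score
print(split_score(configs[::2], configs[1::2]))  # mixed groups -> larger score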


2014


Robot Arm Pose Estimation through Pixel-Wise Part Classification

Bohg, J., Romero, J., Herzog, A., Schaal, S.

In IEEE International Conference on Robotics and Automation (ICRA), pages: 3143-3150, June 2014 (inproceedings)

Abstract
We propose to frame the problem of marker-less robot arm pose estimation as a pixel-wise part classification problem. As input, we use a depth image in which each pixel is classified as belonging either to a particular robot part or to the background. The classifier is a random decision forest trained on a large number of synthetically generated and labeled depth images. From all the training samples ending up at a leaf node, a set of offsets is learned that votes for relative joint positions. Pooling these votes over all foreground pixels and subsequent clustering gives us an estimate of the true joint positions. Due to the intrinsic parallelism of pixel-wise classification, this approach can run in super real-time and is more efficient than previous ICP-like methods. We quantitatively evaluate the accuracy of this approach on synthetic data. We also demonstrate that the method produces accurate joint estimates on real data despite being trained purely on synthetic data.

video code pdf DOI Project Page [BibTex]
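
A hypothetical Python sketch of the classify-then-vote pipeline described in the abstract: a forest labels each depth pixel with a robot part, foreground pixels cast 3D votes for a joint position via offsets, and clustering the pooled votes yields the joint estimate. The features, the fixed per-part offsets, and the mean-shift clustering step are simplifying assumptions; in the paper the offset votes are learned at the leaves of the forest rather than fixed per class.

# Hypothetical sketch: pixel-wise part classification followed by offset voting
# and clustering. Features, offsets, and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import MeanShift

rng = np.random.default_rng(1)
N_TRAIN, N_FEAT = 4000, 16
PARTS = {0: "background", 1: "upper_arm", 2: "forearm"}

# Stand-in for per-pixel features and part labels from rendered depth images.
X_train = rng.normal(size=(N_TRAIN, N_FEAT))
y_train = rng.integers(0, 3, size=N_TRAIN)
clf = RandomForestClassifier(n_estimators=30, max_depth=10, n_jobs=-1).fit(X_train, y_train)

# Assumed per-part mean offsets (metres) from a pixel's 3D point to the elbow joint.
offsets_to_joint = {1: np.array([0.00, 0.00, 0.15]), 2: np.array([0.00, 0.00, -0.10])}

# A new "frame": per-pixel features plus the back-projected 3D point of each pixel.
X_frame = rng.normal(size=(500, N_FEAT))
points_3d = rng.normal(loc=[0.5, 0.0, 0.8], scale=0.05, size=(500, 3))
labels = clf.predict(X_frame)

# Foreground pixels vote; clustering the votes and taking the largest cluster
# gives the estimated joint position.
votes = np.array([p + offsets_to_joint[l] for p, l in zip(points_3d, labels) if l != 0])
if len(votes) > 0:
    ms = MeanShift(bandwidth=0.1).fit(votes)
    biggest = np.bincount(ms.labels_).argmax()
    print("estimated elbow position:", ms.cluster_centers_[biggest])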
