Using Torque Redundancy to Optimize Contact Forces in Legged Robots

Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., Schaal, S.

In Redundancy in Robot Manipulators and Multi-Robot Systems, 57, pages: 35-51, Lecture Notes in Electrical Engineering, Springer Berlin Heidelberg, 2013 (incollection)

Abstract
The development of legged robots for complex environments requires controllers that guarantee both high tracking performance and compliance with the environment. More specifically, the control of contact interaction with the environment is of crucial importance to ensure stable, robust and safe motions. In the following, we present an inverse dynamics controller that exploits torque redundancy to directly and explicitly minimize any combination of linear and quadratic costs in the contact constraints and in the commands. Such a result is particularly relevant for legged robots, as it allows torque redundancy to be used to directly optimize contact interactions. For example, given a desired locomotion behavior, it can guarantee the minimization of contact forces to reduce slipping on difficult terrains while ensuring high tracking performance of the desired motion. The proposed controller is very simple and computationally efficient, and, most importantly, it can greatly improve the performance of legged locomotion on difficult terrains, as the experimental results show.
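To illustrate the core idea (this is a toy sketch, not the paper's derivation): once the map from joint torques to contact forces is linear, choosing the null-space component of the torques to minimize a quadratic contact-force cost is a plain linear least-squares problem. All matrices below are random and purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 6 actuated joints, 3 contact-force components,
# and a 4-dimensional torque null space left over by the motion task.
n_tau, n_f, n_null = 6, 3, 4

# Hypothetical linear map from torques to contact forces, lam = A @ tau + b.
# In the paper this map follows from the constrained rigid-body dynamics;
# here it is random, purely for illustration.
A = rng.standard_normal((n_f, n_tau))
b = rng.standard_normal(n_f)

# tau0: any torque command realizing the desired motion.
# N: basis of the torque null space (adding N @ z leaves the motion unchanged).
tau0 = rng.standard_normal(n_tau)
N = rng.standard_normal((n_tau, n_null))

# Minimize the quadratic contact-force cost ||A (tau0 + N z) + b||^2 over z:
# a linear least-squares problem in the redundancy variable z.
z, *_ = np.linalg.lstsq(A @ N, -(A @ tau0 + b), rcond=None)
tau = tau0 + N @ z

cost0 = np.sum((A @ tau0 + b) ** 2)  # contact-force cost before optimization
cost1 = np.sum((A @ tau + b) ** 2)   # cost after exploiting the redundancy
```

Because the motion task is untouched (the correction lives in its null space), tracking is preserved while the contact-force cost can only decrease.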

link (url) [BibTex]


2011


Multi-Modal Scene Understanding for Robotic Grasping

Bohg, J.

Trita-CSC-A 2011:17, vi, 194 pages, KTH Royal Institute of Technology, Computer Vision and Active Perception (CVAP), Centre for Autonomous Systems (CAS), December 2011 (phdthesis)

Abstract
Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can, for example, be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane, such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s, when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. Especially, household robots are far away from being deployable as general purpose devices. Although advancements have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments given unexpected events and uncertainty in perception and execution. In this thesis, we analyze which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background but also from each other. Once this is achieved, many other tasks become much easier. Configurations of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem and even state-of-the-art methods may fail.
Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly. In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can also quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Depending on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps in both a closed- and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios.

pdf [BibTex]



Iterative path integral stochastic optimal control: Theory and applications to motor control

Theodorou, E. A.

University of Southern California, Los Angeles, CA, 2011 (phdthesis)

PDF [BibTex]



Learning of grasp selection based on shape-templates

Herzog, A.

Karlsruhe Institute of Technology, 2011 (mastersthesis)

[BibTex]


2009


Synchronized Oriented Mutations Algorithm for Training Neural Controllers

Berenz, V., Suzuki, K.

In Advances in Neuro-Information Processing: 15th International Conference, ICONIP 2008, Auckland, New Zealand, November 25-28, 2008, Revised Selected Papers, Part II, pages: 244-251, Springer Berlin Heidelberg, Berlin, Heidelberg, 2009 (inbook)

link (url) DOI [BibTex]



Integration of Visual Cues for Robotic Grasping

Bergström, N., Bohg, J., Kragic, D.

In Computer Vision Systems, 5815, pages: 245-254, Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2009 (incollection)

Abstract
In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We integrate two methods that are advantageous in predicting either how to grasp an object or where to apply a grasp. The first one reconstructs a wire frame object model through curve matching. Elementary grasping actions can be associated to parts of this model. The second method predicts grasping points in a 2D contour image of an object. By integrating the information from the two approaches, we can generate a sparse set of full grasp configurations of good quality. We demonstrate our approach integrated in a vision system for complex shaped objects as well as in cluttered scenes.

pdf link (url) DOI [BibTex]



Bayesian Methods for Autonomous Learning Systems

Ting, J.

Department of Computer Science, University of Southern California, Los Angeles, CA, 2009, clmc (phdthesis)

PDF [BibTex]


1999


Nonparametric regression for learning nonlinear transformations

Schaal, S.

In Prerational Intelligence in Strategies, High-Level Processes and Collective Behavior, 2, pages: 595-621, (Editors: Ritter, H.;Cruse, H.;Dean, J.), Kluwer Academic Publishers, 1999, clmc (inbook)

Abstract
Information processing in animals and artificial movement systems consists of a series of transformations that map sensory signals to intermediate representations, and finally to motor commands. Given the physical and neuroanatomical differences between individuals and the need for plasticity during development, it is highly likely that such transformations are learned rather than pre-programmed by evolution. Such self-organizing processes, capable of discovering nonlinear dependencies between different groups of signals, are one essential part of prerational intelligence. While neural network algorithms seem to be the natural choice when searching for solutions for learning transformations, this paper will take a more careful look at which types of neural networks are actually suited for the requirements of an autonomous learning system. The approach that we will pursue is guided by recent developments in learning theory that have linked neural network learning to well-established statistical theories. In particular, this new statistical understanding has given rise to the development of neural network systems that are directly based on statistical methods. One family of such methods stems from nonparametric regression. This paper will compare nonparametric learning with the more widely used parametric counterparts in a non-technical fashion, and investigate how these two families differ in their properties and their applicability. We will argue that nonparametric neural networks offer a set of characteristics that make them a very promising candidate for on-line learning in autonomous systems.
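The contrast the chapter draws can be made concrete with a minimal sketch (not code from the chapter): Nadaraya-Watson kernel regression, one of the simplest nonparametric estimators, recovers a nonlinear sensorimotor-style map from noisy samples, while a global parametric straight-line fit cannot. The data and bandwidth below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a nonlinear map, standing in for a learned
# sensory-to-motor transformation.
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

def nw_predict(xq, x, y, h=0.3):
    """Nadaraya-Watson kernel regression: a locally weighted average
    of the training targets, with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

xq = np.linspace(-3, 3, 50)
y_np = nw_predict(xq, x, y)

# Parametric baseline: a single global straight-line fit.
coef = np.polyfit(x, y, deg=1)
y_lin = np.polyval(coef, xq)

# Mean squared error against the true underlying function.
err_np = np.mean((y_np - np.sin(xq)) ** 2)
err_lin = np.mean((y_lin - np.sin(xq)) ** 2)
```

The nonparametric estimate tracks sin(x) closely wherever data is available, whereas the linear model's error is dominated by its fixed global shape, which is the kind of representational mismatch the abstract argues against for autonomous learning.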

link (url) [BibTex]


1992


Informationssysteme mit CAD (Information systems within CAD)

Schaal, S.

In CAD/CAM Grundlagen, pages: 199-204, (Editors: Milberg, J.), Springer, Buchreihe CIM-TT, Berlin, 1992, clmc (inbook)

[BibTex]
