Department Talks

Optical Robot Skin and Whole Body Vision

Talk
  • 19 October 2016 • 14:00 - 15:00
  • Chris Atkeson and Akihiko Yamaguchi
  • Max Planck House, Lecture Hall

Chris Atkeson will talk about the motivation for optical robot skin and whole-body vision. Akihiko Yamaguchi will talk about a first application, FingerVision.

Organizers: Ludovic Righetti


  • Jose R. Medina
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Control under uncertainty is an omnipresent problem in robotics that typically arises when robots must cope with unknown environments or tasks. Robot controllers usually ignore uncertainty by considering only the expected outcome of the robot's internal model. Interestingly, neuroscientists have shown that humans adapt their decisions depending on the level of uncertainty, which is reflected not in expected values but in higher-order statistics. In this talk I will first present an approach that systematically addresses this problem in the context of stochastic optimal control. I will then give an example of how the structure of the robot's internal model defines the level of uncertainty and its distribution. Finally, experiments in a physical human-robot interaction setting will illustrate the capabilities of this approach.
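
Background note: one standard way in which uncertainty beyond the expected value enters a control objective is the risk-sensitive exponential cost, whose expansion makes the role of higher-order statistics explicit (shown purely as an illustration; the exact formulation used in the talk may differ):

\[
  J_\gamma(\pi) \;=\; \frac{1}{\gamma}\,\log \mathbb{E}\!\left[e^{\gamma\, C(\pi)}\right]
  \;=\; \mathbb{E}[C(\pi)] \;+\; \frac{\gamma}{2}\,\operatorname{Var}[C(\pi)] \;+\; O(\gamma^2),
\]

where \(C(\pi)\) is the accumulated cost under policy \(\pi\); \(\gamma > 0\) penalizes cost variance, \(\gamma < 0\) rewards it, and \(\gamma \to 0\) recovers the standard expected cost.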

Organizers: Ludovic Righetti


  • Stéphane Caron
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Humanoid locomotion on horizontal floors was solved by closing the feedback loop on the Zero-tilting Moment Point (ZMP), a measurable dynamic point that needs to stay inside the foot contact area to prevent the robot from falling (the contact stability criterion). However, this criterion does not apply to general multi-contact settings, the "new frontier" in humanoid locomotion. In this talk, we will see how the ideas of ZMP and support area can be generalized and applied to multi-contact locomotion. First, we will show how support areas can be calculated in any virtual plane, allowing one to apply classical schemes even when contacts are not coplanar. Yet, these schemes constrain the center of mass (COM) to planar motions. We overcome this limitation by extending the contact-stability criterion from a support area to a support cone of 3D COM accelerations. We use this new criterion to implement a multi-contact walking pattern generator based on predictive control of COM accelerations, which we will demonstrate in real-time simulations during the presentation.
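
Background note: under the usual flat-floor, linear-inverted-pendulum assumptions (an illustration only, not the generalized criterion developed in the talk), the classical contact-stability condition reads

\[
  \mathbf{p}_{\mathrm{ZMP}} \;=\; \mathbf{c} \;-\; \frac{h}{g}\,\ddot{\mathbf{c}} \;\in\; \mathcal{S},
\]

where \(\mathbf{c}\) is the horizontal COM position, \(h\) the constant COM height, \(g\) the gravitational acceleration, and \(\mathcal{S}\) the support polygon spanned by the foot contact area. The talk generalizes this support area to non-coplanar contacts and to a cone of admissible 3D COM accelerations.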

Organizers: Ludovic Righetti


  • Christian Ebenbauer
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

In many control applications, the goal is to operate a dynamical system in an optimal way with respect to a certain performance criterion. In a combustion engine, for example, the goal could be to control the engine such that emissions are minimized. Due to the complexity of an engine, the desired operating point is unknown, or may even change over time, so that it cannot be determined a priori. Extremum seeking control is a learning-control methodology for solving such control problems. It is a model-free method that optimizes the steady-state behavior of a dynamical system. Since it can be implemented with very limited resources, it has found several applications in industry. In this talk we give an introduction to extremum seeking theory based on a recently developed framework which relies on tools from geometric control. Furthermore, we discuss how this framework can be utilized to solve distributed optimization and coordination problems in multi-agent systems.
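
Background note: the sketch below shows the classical perturbation-based extremum seeking loop on a static map, which may help place the talk; the objective, gains and dither parameters are made up for illustration, and the geometric-control framework discussed in the talk is more general.

```python
import numpy as np

# Minimal sketch of classic perturbation-based extremum seeking (illustration only).
# The steady-state map f is unknown to the controller; its minimizer u* = 2.0 is
# a hypothetical example, as are the gains and dither parameters below.
f = lambda u: (u - 2.0) ** 2 + 1.0

dt = 1e-3                                  # integration step
a, omega = 0.1, 50.0                       # dither amplitude and frequency
k = 2.0                                    # adaptation gain
theta = 0.0                                # current estimate of the optimal input
y_hp, y_prev = 0.0, f(theta)               # crude high-pass filter state

for i in range(200_000):
    t = i * dt
    u = theta + a * np.sin(omega * t)      # probe the plant around the estimate
    y = f(u)
    y_hp = 0.99 * (y_hp + y - y_prev)      # high-pass: strip the DC component of y
    y_prev = y
    grad_est = (2.0 / a) * y_hp * np.sin(omega * t)  # demodulation ~ local gradient
    theta -= k * grad_est * dt             # gradient descent on the steady-state map

print(f"estimated optimizer: {theta:.2f} (true optimizer: 2.00)")
```

The loop never uses a model of f: it infers the local slope from the correlation between the injected dither and the measured output, which is why the method is model-free.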

Organizers: Sebastian Trimpe


Safe Learning Control for Mobile Robots

IS Colloquium
  • 25 April 2016 • 11:15 - 12:15
  • Angela Schoellig
  • Max Planck House, Lecture Hall

In the last decade, there has been a major shift in the perception, use and predicted applications of robots. In contrast to their early industrial counterparts, robots are envisioned to operate in increasingly complex and uncertain environments, alongside humans, and over long periods of time. In my talk, I will argue that machine learning is indispensable for this new generation of robots to achieve high performance. Based on various examples (and videos) ranging from aerial-vehicle dancing to ground-vehicle racing, I will demonstrate the effect of robot learning, and highlight how our learning algorithms intertwine model-based control with machine learning. In particular, I will focus on our latest work that provides guarantees during learning (for example, safety and robustness guarantees) by combining traditional control methods (nonlinear, robust and model predictive control) with Gaussian process regression.
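
Background note: the learned component in such approaches is often a Gaussian process model of the unknown part of the dynamics, whose posterior uncertainty can be handed to a robust or predictive controller. The sketch below shows plain GP regression from noisy data; kernel, data and noise level are illustrative assumptions, not the speaker's setup.

```python
import numpy as np

# Minimal sketch of Gaussian process regression of an unknown function from
# noisy samples (illustration only; data and hyperparameters are made up).
def rbf(A, B, ell=0.5, sf=1.0):
    d = A[:, None] - B[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 20)                           # training inputs, e.g. states
y = np.sin(2 * X) + 0.05 * rng.standard_normal(20)   # noisy observations of the unknown function

sn = 0.05                                            # assumed measurement noise std
K = rbf(X, X) + sn**2 * np.eye(len(X))
Xq = np.linspace(-2, 2, 5)                           # query points
Kq = rbf(Xq, X)

mean = Kq @ np.linalg.solve(K, y)                    # posterior mean prediction
cov = rbf(Xq, Xq) - Kq @ np.linalg.solve(K, Kq.T)    # posterior covariance
std = np.sqrt(np.maximum(np.diag(cov), 0.0))         # uncertainty, usable for robust bounds

for xq, m, s in zip(Xq, mean, std):
    print(f"x={xq:+.2f}  mean={m:+.3f}  +/-2std={2*s:.3f}")
```

The posterior standard deviation is what allows a controller to reason about worst-case model error rather than only about the mean prediction.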

Organizers: Sebastian Trimpe


  • Felix Berkenkamp
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Bayesian optimization is a powerful tool that has been successfully used to automatically optimize the parameters of a fixed control policy. It has many desirable properties, such as data efficiency and the ability to handle noisy measurements. However, standard Bayesian optimization does not consider any constraints imposed by the real system, which limits its application to highly controlled environments. In this talk, I will introduce an extension of this framework that additionally considers multiple safety constraints during the optimization process. This method enables safe parameter optimization by only evaluating parameters that fulfill all safety constraints with high probability. I will show several experiments on a quadrotor vehicle which demonstrate the method. Lastly, I will briefly talk about how the ideas behind safe Bayesian optimization can be used to safely explore unknown environments (MDPs).
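
Background note: the core mechanism of only evaluating parameters whose safety constraint holds with high probability under a Gaussian process model can be sketched as below. This is a simplified, one-dimensional illustration, not the speaker's algorithm; the objective, kernel, safety threshold and confidence scaling are assumptions made for the example.

```python
import numpy as np

# Simplified sketch of safe, GP-based parameter optimization (illustration only).
def rbf(A, B, ell=0.3):
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)

def gp_posterior(Xtr, ytr, Xq, noise_var=0.01):
    K = rbf(Xtr, Xtr) + noise_var * np.eye(len(Xtr))
    Kq = rbf(Xq, Xtr)
    mean = Kq @ np.linalg.solve(K, ytr)
    var = 1.0 - np.sum(Kq * np.linalg.solve(K, Kq.T).T, axis=1)
    return mean, np.sqrt(np.maximum(var, 1e-9))

def performance(theta):                      # hypothetical objective; here it also serves
    return np.exp(-(theta - 0.7) ** 2 / 0.1) # as the safety signal (must stay above h_min)

h_min, beta = 0.3, 2.0                       # safety threshold and confidence scaling
grid = np.linspace(0.0, 1.0, 200)            # candidate controller parameters
X, y = [0.5], [performance(0.5)]             # start from a parameter known to be safe

for _ in range(10):
    mean, std = gp_posterior(np.array(X), np.array(y), grid)
    safe = mean - beta * std > h_min         # safe with high probability under the GP
    if not safe.any():
        break
    ucb = mean + beta * std
    theta = grid[safe][np.argmax(ucb[safe])] # optimistic candidate inside the safe set
    X.append(float(theta)); y.append(performance(theta))

print(f"best safe parameter found: {X[int(np.argmax(y))]:.3f}")
```

Only parameters inside the current high-probability safe set are ever evaluated on the (here simulated) system, which is what keeps exploration from violating the constraint.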

Organizers: Sebastian Trimpe


  • Jun Nakanishi
  • TTR, AMD Seminar Room (first floor)

Understanding the principles of natural movement generation has been, and continues to be, one of the most interesting and important open problems in the fields of robotics and neural control of movement. In this talk, I give an overview of our previous work on the control of dynamic movements in robotic systems, aimed at deriving control design principles and understanding motion generation. Our research has focused on dynamical systems theory, adaptive and optimal control, and statistical learning, and their application to robotics towards achieving dynamically dexterous behavior in robotic systems. First, our studies on dynamical-systems-based task encoding in robot brachiation, movement primitives for imitation learning, and oscillator-based biped locomotion control will be presented. Then, our recent work on optimal control of robotic systems with variable stiffness actuation will be introduced, aimed at achieving highly dynamic movements by exploiting the natural dynamics of the system. Finally, our new humanoid robot H-1 at TUM-ICS will be introduced.

Organizers: Ludovic Righetti


  • Alexander Sprowitz
  • TTR, AMD Seminar Room (first floor)

The current performance gap between legged animals and legged robots is large. Animals can reach high locomotion speed in complex terrain, or run at a low cost of transport. They are able to rapidly sense their environment, process sensor data, learn and plan locomotion strategies, and execute feedforward- and feedback-controlled locomotion patterns fluently on the fly. Animals achieve this with hardware that has, compared to the latest man-made actuators, electronics, and processors, relatively low bandwidth, medium power density, and low speed. The most common approach to legged robot locomotion still assumes rigid linkage hardware, high-torque actuators, and model-based control algorithms with high-bandwidth, high-gain feedback mechanisms. State-of-the-art robotic demonstrations such as the 2015 DARPA challenge showed that seemingly trivial locomotion tasks, such as level walking or walking over soft sand, still stop most of our biped and quadruped robots. This talk focuses on an alternative class of legged robots and control algorithms, designed and implemented on several quadruped and biped platforms, for a new generation of legged robotic systems. Biomechanical blueprints inspired by nature, and mechanisms from locomotion neurocontrol, were designed and tested, and can be compared to their biological counterparts. We focus on hardware and controllers that allow comparatively cheap robotics in terms of computation, control, and mechanical complexity. Our goal is highly dynamic, robust legged systems with low weight and inertia, relatively low mechanical complexity and cost of transport, and little computational demand for standard locomotion tasks. Ideally, such systems can also be used as testing platforms to explain not-yet-understood biomechanical and neurocontrol aspects of animal locomotion.

Organizers: Ludovic Righetti


Making Robots Learn

IS Colloquium
  • 13 November 2015 • 11:30 - 12:30
  • Prof. Pieter Abbeel
  • Max Planck House Tübingen, Lecture Hall

Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what often ends up being time-consuming, task-specific programming. In this talk I will describe the ideas behind two promising types of robot learning: First I will discuss apprenticeship learning, in which robots learn from human demonstrations, and which has enabled autonomous helicopter aerobatics, knot tying, basic suturing, and cloth manipulation. Then I will discuss deep reinforcement learning, in which robots learn through their own trial and error, and which has enabled learning locomotion as well as a range of assembly and manipulation tasks.

Organizers: Stefan Schaal


  • Yasemin Bekiroglu
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Information required to plan grasps, such as object shape and pose, needs to be extracted from the environment through sensors. However, sensory measurements are noisy and associated with a degree of uncertainty. Furthermore, object parameters relevant to grasp planning, e.g., friction and mass, may not be accurately estimated. In real-world settings, these issues can lead to grasp failures with serious consequences. I will talk about learning approaches (discriminative and generative) that use real sensory data, e.g., visual and tactile, to assess grasp success and can be used to trigger plan corrections. I will also present a probabilistic approach for learning object models based on visual and tactile data through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts that are uncertain in terms of shape.
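
Background note: as a toy illustration of the discriminative flavour of grasp-success assessment, the snippet below fits a logistic classifier to synthetic tactile features; the features, labels and threshold are made up and do not reflect the speaker's data or models.

```python
import numpy as np

# Toy discriminative grasp-success predictor on synthetic tactile features
# (illustration only; not the speaker's dataset or model).
rng = np.random.default_rng(1)
n = 200
features = rng.normal(size=(n, 3))            # e.g. contact force, contact area, slip signal
w_true = np.array([1.5, 1.0, -2.0])           # hypothetical ground-truth relation
labels = (features @ w_true + 0.3 * rng.standard_normal(n) > 0).astype(float)

# Logistic regression fitted by batch gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(features @ w)))
    w -= 0.1 * features.T @ (p - labels) / n

p_success = 1.0 / (1.0 + np.exp(-(features @ w)))
print("training accuracy:", np.mean((p_success > 0.5) == labels))
# A low predicted success probability would be the cue to trigger a plan correction.
```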

Organizers: Jeannette Bohg