Department Talks
  • Christian Ebenbauer
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

In many control applications, the goal is to operate a dynamical system optimally with respect to a certain performance criterion. In a combustion engine, for example, the goal could be to control the engine such that emissions are minimized. Due to the complexity of an engine, the desired operating point is unknown or may even change over time, so it cannot be determined a priori. Extremum seeking control is a learning-control methodology for solving such control problems. It is a model-free method that optimizes the steady-state behavior of a dynamical system. Since it can be implemented with very limited resources, it has found several applications in industry. In this talk we give an introduction to extremum seeking theory based on a recently developed framework which relies on tools from geometric control. Furthermore, we discuss how this framework can be utilized to solve distributed optimization and coordination problems in multi-agent systems.
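
As a rough illustration of the model-free idea (the classic perturbation-based scheme, not the geometric-control framework of the talk), the sketch below tunes a parameter of an unknown static performance map using only measured outputs; the map, gains, and dither values are invented for illustration.

```python
# Minimal sketch of perturbation-based extremum seeking on an unknown static map.
# The "controller" never sees J itself, only its measured value, yet the
# parameter estimate drifts toward the minimizer.
import numpy as np

def J(theta):
    """Unknown steady-state performance map (placeholder: minimum at theta = 2)."""
    return (theta - 2.0) ** 2 + 1.0

dt = 1e-3        # integration step [s]
T = 50.0         # simulation horizon [s]
a = 0.2          # dither amplitude
omega = 10.0     # dither frequency [rad/s]
k = 2.0          # adaptation gain
omega_h = 1.0    # washout (high-pass) filter frequency [rad/s]

theta_hat = 0.0
eta = J(theta_hat)                                # low-pass state, removes the DC part of y

for i in range(int(T / dt)):
    t = i * dt
    y = J(theta_hat + a * np.sin(omega * t))      # measured performance with dither
    eta += omega_h * (y - eta) * dt               # low-pass filter of the measurement
    # Demodulation: sin(w t) * (high-passed y) approximates the gradient of J
    theta_hat -= k * np.sin(omega * t) * (y - eta) * dt

print(f"estimated optimizer: {theta_hat:.3f}  (true optimizer: 2.000)")
```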

Organizers: Sebastian Trimpe


Safe Learning Control for Mobile Robots

IS Colloquium
  • 25 April 2016 • 11:15 - 12:15
  • Angela Schoellig
  • Max Planck Haus Lecture Hall

In the last decade, there has been a major shift in the perception, use and predicted applications of robots. In contrast to their early industrial counterparts, robots are envisioned to operate in increasingly complex and uncertain environments, alongside humans, and over long periods of time. In my talk, I will argue that machine learning is indispensable in order for this new generation of robots to achieve high performance. Based on various examples (and videos) ranging from aerial-vehicle dancing to ground-vehicle racing, I will demonstrate the effect of robot learning, and highlight how our learning algorithms intertwine model-based control with machine learning. In particular, I will focus on our latest work that provides guarantees during learning (for example, safety and robustness guarantees) by combining traditional controls methods (nonlinear, robust and model predictive control) with Gaussian process regression.
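
As a loose, hypothetical illustration of one ingredient mentioned above, Gaussian process regression of what the controller does not know, the sketch below fits a GP to the residual between a simple nominal model and noisy measurements; the resulting confidence interval is the kind of learned uncertainty bound a robust or predictive controller could consume. The dynamics, data, and kernel are assumptions for illustration, not the speaker's setup.

```python
# Hedged sketch: GP regression of a model residual. Everything here is an
# invented toy system, not an algorithm from the talk.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_accel(v):
    """'Real' system: nominal drag model plus an unmodelled effect."""
    return -0.5 * v + 0.3 * np.sin(2.0 * v)

def nominal_accel(v):
    """Simple model available to the controller."""
    return -0.5 * v

# Noisy measurements of the residual (true minus nominal)
v_train = rng.uniform(-3.0, 3.0, size=(30, 1))
residual = true_accel(v_train) - nominal_accel(v_train) + 0.02 * rng.standard_normal((30, 1))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(v_train, residual.ravel())

# Predicted residual and a 2-sigma band that a robust or predictive controller
# could treat as a learned uncertainty margin
v_test = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)
mean, std = gp.predict(v_test, return_std=True)
for v, m, s in zip(v_test.ravel(), mean, 2 * std):
    print(f"v = {v:+.1f}: residual = {m:+.3f} +/- {s:.3f}")
```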

Organizers: Sebastian Trimpe


  • Felix Berkenkamp
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Bayesian optimization is a powerful tool that has been successfully used to automatically optimize the parameters of a fixed control policy. It has many desirable properties, such as data efficiency and the ability to handle noisy measurements. However, standard Bayesian optimization does not consider any constraints imposed by the real system, which limits its application to highly controlled environments. In this talk, I will introduce an extension of this framework which additionally considers multiple safety constraints during the optimization process. This method enables safe parameter optimization by only evaluating parameters that fulfill all safety constraints with high probability. I will show several experiments on a quadrotor vehicle that demonstrate the method. Lastly, I will briefly talk about how the ideas behind safe Bayesian optimization can be used to safely explore unknown environments (MDPs).
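
In the spirit of the description above (a SafeOpt-style scheme sketched from general knowledge, not necessarily the exact algorithm of the talk), the snippet below only proposes candidate parameters whose safety value clears a threshold with high probability according to a Gaussian process lower confidence bound, and picks the most promising safe candidate by an upper confidence bound on performance. The objective, constraint, and all numbers are placeholders.

```python
# Hedged sketch of safe Bayesian parameter optimization with GP confidence bounds.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def performance(theta):              # unknown objective (toy, to be maximized)
    return -(theta - 0.6) ** 2

def safety(theta):                   # unknown safety measure, must stay above 0
    return 0.5 - 2.0 * abs(theta - 0.4)

beta = 2.0                           # confidence multiplier (~95%)
candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)

# Seed with one parameter known to be safe
X = np.array([[0.4]])
y_perf = np.array([performance(0.4)])
y_safe = np.array([safety(0.4)])

for _ in range(10):
    gp_perf = GaussianProcessRegressor(RBF(0.15), alpha=1e-6, optimizer=None).fit(X, y_perf)
    gp_safe = GaussianProcessRegressor(RBF(0.15), alpha=1e-6, optimizer=None).fit(X, y_safe)
    m_p, s_p = gp_perf.predict(candidates, return_std=True)
    m_s, s_s = gp_safe.predict(candidates, return_std=True)

    safe_set = m_s - beta * s_s > 0.0                 # safe with high probability
    if not safe_set.any():
        break
    ucb = np.where(safe_set, m_p + beta * s_p, -np.inf)
    theta_next = candidates[np.argmax(ucb), 0]        # best safe candidate

    X = np.vstack([X, [[theta_next]]])
    y_perf = np.append(y_perf, performance(theta_next))
    y_safe = np.append(y_safe, safety(theta_next))

print(f"best safe parameter found: {X[np.argmax(y_perf), 0]:.2f}")
```

The safe set starts as a small neighborhood of the seed parameter and grows only where the GP is confident the constraint holds, which is the behavior the abstract describes.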

Organizers: Sebastian Trimpe


  • Jun Nakanishi
  • TTR, AMD Seminar Room (first floor)

Understanding the principles of natural movement generation has been, and continues to be, one of the most interesting and important open problems in the fields of robotics and neural control of movement. In this talk, I give an overview of our previous work on the control of dynamic movements in robotic systems, with the goal of identifying control design principles and understanding motion generation. Our research has focused on dynamical systems theory, adaptive and optimal control, and statistical learning, and their application to robotics towards achieving dynamically dexterous behavior in robotic systems. First, our studies on dynamical-systems-based task encoding in robot brachiation, movement primitives for imitation learning, and oscillator-based biped locomotion control will be presented. Then, our recent work on optimal control of robotic systems with variable stiffness actuation will be introduced, with the aim of achieving highly dynamic movements by exploiting the natural dynamics of the system. Finally, our new humanoid robot H-1 at TUM-ICS will be introduced.

Organizers: Ludovic Righetti


  • Alexander Sprowitz
  • TTR, AMD Seminar Room (first floor)

The current performance gap between legged animals and legged robots is large. Animals can reach high locomotion speeds in complex terrain, or run at a low cost of transport. They are able to rapidly sense their environment, process sensor data, learn and plan locomotion strategies, and execute feedforward and feedback controlled locomotion patterns fluently on the fly. Animals achieve this with hardware that has, compared to the latest man-made actuators, electronics, and processors, relatively low bandwidth, medium power density, and low speed. The most common approach to legged robot locomotion still assumes rigid linkage hardware, high-torque actuators, and model-based control algorithms with high-bandwidth, high-gain feedback mechanisms. State-of-the-art robotic demonstrations such as the 2015 DARPA challenge showed that seemingly trivial locomotion tasks such as level walking, or walking over soft sand, still stop most of our biped and quadruped robots. This talk focuses on an alternative class of legged robots and control algorithms, designed and implemented on several quadruped and biped platforms, for a new generation of legged robotic systems. Biomechanical blueprints inspired by nature, and mechanisms from locomotion neurocontrol, were designed and tested, and can be compared to their biological counterparts. We focus on hardware and controllers that allow comparably cheap robotics, in terms of computation, control, and mechanical complexity. Our goal is highly dynamic, robust legged systems with low weight and inertia, relatively low mechanical complexity and cost of transport, and little computational demand for standard locomotion tasks. Ideally, such systems can also be used as testing platforms to explain biomechanical and neurocontrol aspects of animals that are not yet understood.

Organizers: Ludovic Righetti


Making Robots Learn

IS Colloquium
  • 13 November 2015 • 11:30 - 12:30
  • Prof. Pieter Abbeel
  • Max Planck House Tübingen, Lecture Hall

Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what often ends up being time-consuming, task-specific programming. In this talk I will describe the ideas behind two promising types of robot learning: First I will discuss apprenticeship learning, in which robots learn from human demonstrations, and which has enabled autonomous helicopter aerobatics, knot tying, basic suturing, and cloth manipulation. Then I will discuss deep reinforcement learning, in which robots learn through their own trial and error, and which has enabled learning locomotion as well as a range of assembly and manipulation tasks.

Organizers: Stefan Schaal


  • Yasemin Bekiroglu
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Information required to plan grasps but not known in advance, such as object shape and pose, needs to be extracted from the environment through sensors. However, sensory measurements are noisy and associated with a degree of uncertainty. Furthermore, object parameters relevant to grasp planning, e.g. friction and mass, may not be accurately estimated. In real-world settings, these issues can lead to grasp failures with serious consequences. I will talk about learning approaches, both discriminative and generative, that use real sensory data, e.g. visual and tactile, to assess grasp success and can be used to trigger plan corrections. I will also present a probabilistic approach for learning object models from visual and tactile data gathered through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts whose shape is uncertain.

Organizers: Jeannette Bohg


  • Anna Belardinelli
  • Max Planck House Lecture Hall

Our eyes typically anticipate the next action module in a sequence by targeting the relevant object for the following step. Yet, how the final goal, or the way we intend to achieve it, is reflected in the early visual exploration of each object has been less investigated. In a series of experiments we considered how scan paths on real-world objects are affected by factors such as task, object orientation, familiarity, or low-level saliency, thereby revealing which components can account for fixation target selection during eye-hand coordination. In each experiment, the fixation distribution differed significantly depending on the final task (e.g. lifting vs. opening). Already from the second fixation prior to reaching the object, the eyes targeted the task-relevant regions, and these correlated significantly with salient features such as oriented edges. Familiarity had a significant effect when different tools were used as stimuli, with more fixations concentrating on the active end of unfamiliar tools. Object orientation (upright or inverted) and anticipation of the final comfort state determined the height of the fixations on the objects. Scan path dynamics thus reveal how action is planned, offering indirect insight into the structuring of complex behaviour and into how task and affordance perception relate to motor control.

Organizers: Jeannette Bohg


Autonomous Systems At Moog

Talk
  • 06 July 2015 • 14:00 - 15:00
  • Gonzalo Rey
  • AMD Seminar Room

The talk will briefly introduce Moog Inc. It will then describe Moog's view of its value proposition to robotics and autonomous systems. If robots and autonomous systems are to achieve their enormous potential to positively impact the world economy, the technology has to achieve levels of robustness, availability, reliability and safety equivalent to those expected from current solutions. The commercial aircraft industry has seen an order-of-magnitude increase in machine complexity over the last fifty years in order to reach the lowest cost per seat-mile and the highest safety levels in its history. Today one can travel cheaper and safer than ever. Moog believes that there are opportunities to apply to robotics and autonomous systems the methodologies and principles that, for aircraft, enabled the lowest ever costs while managing the highest ever complexity and safety levels. The talk will briefly describe the types of approaches used in aircraft to achieve failure rates so low that they are hard to comprehend (or believe, for those not familiar with the engineering approach), while at the same time relying on low-cost commercial off-the-shelf components in electronics, materials and manufacturing processes. Next the talk will move on to a couple of active research projects Moog is engaged in with ETHZ and IIT. Finally, it will give an overview of an emerging research effort in certification of advanced (robot) control laws.

Organizers: Ludovic Righetti


  • Andre Seyfarth
  • MRZ Seminar Room

In this talk, a series of conceptual models for describing human and animal locomotion will be presented, ranging from standing to walking and running. By subsequently increasing the complexity of the models, we show that basic properties of the underlying spring-mass model can be inherited by the more detailed models. Model extensions include the consideration of a rigid trunk (instead of a point mass), non-elastic leg properties (instead of a massless leg spring), additional legs (two and four legs), leg masses, leg segments (e.g. a compliantly attached foot) and energy management protocols. Furthermore, we propose a methodology to evaluate and refine conceptual models based on a test trilogy: a simulation test, a hardware test, and a behavioral comparison of biological experiments with model predictions and hardware models.
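
To make the underlying spring-mass model concrete, here is a minimal, illustrative stance-phase simulation of the planar spring-loaded inverted pendulum (a point mass on a massless linear leg spring attached to a fixed foot point). Mass, stiffness, and touchdown conditions are placeholder values, not taken from the presentation.

```python
# Hedged sketch of the planar spring-mass (SLIP) model during stance.
import numpy as np

m = 80.0        # body mass [kg] (placeholder)
k = 20000.0     # leg stiffness [N/m] (placeholder)
L0 = 1.0        # leg rest length [m]
g = 9.81        # gravity [m/s^2]
dt = 1e-4       # integration step [s]

# Touchdown state: centre of mass relative to the fixed foot at the origin,
# leg at rest length and inclined about 17 degrees forward of vertical.
phi0 = np.radians(17.0)
x, y = -L0 * np.sin(phi0), L0 * np.cos(phi0)
vx, vy = 3.0, -0.5

for i in range(int(1.0 / dt)):           # at most 1 s of stance
    L = np.hypot(x, y)
    if i > 0 and L >= L0:                # leg back at rest length: take-off
        break
    F = k * (L0 - L)                     # linear leg spring (pushes only)
    ax = F * x / (L * m)
    ay = F * y / (L * m) - g
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(f"stance time: {i * dt:.3f} s, take-off velocity: ({vx:.2f}, {vy:.2f}) m/s")
```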