

2019


Automated Generation of Reactive Programs from Human Demonstration for Orchestration of Robot Behaviors

Berenz, V., Bjelic, A., Mainprice, J.

ArXiv, 2019 (article)

Abstract
Social robots or collaborative robots that have to interact with people in a reactive way are difficult to program. This difficulty stems from the different skills required of the programmer: to provide an engaging user experience, the behavior must include a sense of aesthetics while operating robustly in a continuously changing environment. The Playful framework allows composing such dynamic behaviors using a basic set of action and perception primitives. Within this framework, a behavior is encoded as a list of declarative statements corresponding to high-level sensory-motor couplings. To enable non-expert users to program such behaviors, we propose a Learning from Demonstration (LfD) technique that maps motion capture of humans directly to a Playful script. The approach proceeds by identifying the sensory-motor couplings that are active at each step using the Viterbi path in a Hidden Markov Model (HMM). Given these activation patterns, binary classifiers called evaluations are trained to associate activations with sensory data. Modularity is increased by clustering the sensory-motor couplings, leading to a hierarchical tree structure. The novelty of the proposed approach is that the learned behavior is encoded not in terms of trajectories in a task space, but as couplings between sensory information and high-level motor actions. This provides advantages in terms of behavioral generalization and reactivity displayed by the robot.
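The Viterbi step mentioned in the abstract can be illustrated with a small sketch. This is not the authors' implementation; it is a generic log-space Viterbi decoder in which the hidden states stand in for the active sensory-motor couplings, and the transition and emission matrices are hypothetical placeholders.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden-state sequence (here: active coupling per step).

    log_pi: (S,)   log initial state probabilities
    log_A:  (S, S) log transition probabilities
    log_B:  (S, O) log emission probabilities
    obs:    length-T sequence of observation indices
    """
    T, S = len(obs), len(log_pi)
    delta = np.empty((T, S))             # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A        # (prev, cur) pairs
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(S)] + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):       # backtrack through the pointers
        path[t] = back[t + 1][path[t + 1]]
    return path
```

Given per-step sensory observations, the returned path is the activation pattern from which the evaluations could then be trained.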

Support Video link (url) [BibTex]

2017


Interactive Perception: Leveraging Action in Perception and Perception in Action

Bohg, J., Hausman, K., Sankaran, B., Brock, O., Kragic, D., Schaal, S., Sukhatme, G.

IEEE Transactions on Robotics, 33, pages: 1273-1291, December 2017 (article)

Abstract
Recent approaches in robotics follow the insight that perception is facilitated by interactivity with the environment. These approaches are subsumed under the term Interactive Perception (IP). We argue that IP provides the following benefits: (i) any type of forceful interaction with the environment creates a new type of informative sensory signal that would otherwise not be present and (ii) any prior knowledge about the nature of the interaction supports the interpretation of the signal. This is facilitated by knowledge of the regularity in the combined space of sensory information and action parameters. The goal of this survey is to postulate this as a principle and collect evidence in support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of Interactive Perception. We close this survey by discussing the remaining open questions. Thereby, we hope to define a field and inspire future work.

arXiv DOI Project Page [BibTex]



Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning

Li, W., Bohg, J., Fritz, M.

arXiv, November 2017 (article) Submitted

Abstract
Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. This bypasses the need to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may be different for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies parametrized by a goal. We validated the model on a toy example of navigating in a grid world with different target positions and on a block stacking task with different target structures of the final tower. In contrast to prior work, our policies show better generalization across different goals.
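As a toy illustration of goal-parameterized policies, the sketch below runs tabular Q-learning on a 1-D grid world where the goal changes every episode and the value table is indexed by (state, goal). This is a minimal stand-in for the idea, not the deep RL architecture from the paper; all names and parameters are invented for the example.

```python
import random

def train_goal_q(n=5, episodes=3000, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning on a 1-D grid; the goal is resampled every episode
    and the value table is indexed by (state, goal) -- a goal-parameterized
    policy in miniature. Actions: 0 = left, 1 = right."""
    rng = random.Random(seed)
    Q = {}

    def q(s, g):
        return Q.setdefault((s, g), [0.0, 0.0])

    for _ in range(episodes):
        g, s = rng.randrange(n), rng.randrange(n)
        for _ in range(4 * n):                       # episode step limit
            if s == g:
                break
            if rng.random() < eps:                   # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = 0 if q(s, g)[0] >= q(s, g)[1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            r = 1.0 if s2 == g else 0.0
            target = r if s2 == g else r + gamma * max(q(s2, g))
            q(s, g)[a] += alpha * (target - q(s, g)[a])
            s = s2
    return Q

def greedy_step(Q, s, g):
    """Direction (-1 left, +1 right) preferred by the goal-conditioned policy."""
    v = Q.get((s, g), [0.0, 0.0])
    return -1 if v[0] > v[1] else 1
```

After training, the same table answers queries for every goal, which is the essence of parameterizing the policy by the goal rather than learning one policy per target.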

arXiv [BibTex]


Event-based State Estimation: An Emulation-based Approach

Trimpe, S.

IET Control Theory & Applications, 11(11):1684-1693, July 2017 (article)

Abstract
An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor agents observe a dynamic process and sporadically transmit their measurements to estimator agents over a shared bus network. Local event-triggering protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. The event-based design is shown to emulate the performance of a centralised state observer design up to guaranteed bounds, but with reduced communication. The stability results for state estimation are extended to the distributed control system that results when the local estimates are used for feedback control. Results from numerical simulations and hardware experiments illustrate the effectiveness of the proposed approach in reducing network communication.
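The core idea of local event triggering can be sketched in a few lines: a sensor transmits only when the prediction the remote estimator would make on its own drifts too far from the measurement. This send-on-delta sketch for a scalar linear process illustrates the principle only; it is not the paper's emulation-based design, and the model and threshold are invented for the example.

```python
def run_event_based(xs, a=0.95, delta=0.5):
    """Send-on-delta event trigger for a scalar linear process x' = a*x.

    The sensor simulates the remote estimator's open-loop prediction and
    transmits the true state only when the prediction error would exceed
    `delta`. Returns the estimate trajectory and the transmission count.
    """
    est = xs[0]                  # estimator initialized with first measurement
    sent = 1
    traj = [est]
    for x in xs[1:]:
        est = a * est            # open-loop prediction, run on both sides
        if abs(x - est) > delta: # event trigger evaluated at the sensor
            est = x              # transmission: estimator snaps to measurement
            sent += 1
        traj.append(est)
    return traj, sent
```

By construction the estimation error stays within `delta`, while the bus carries far fewer messages than periodic transmission would.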

arXiv Supplementary material PDF DOI Project Page [BibTex]


Probabilistic Articulated Real-Time Tracking for Robot Manipulation

(Best Paper of RA-L 2017, Finalist of Best Robotic Vision Paper Award of ICRA 2017)

Garcia Cifuentes, C., Issac, J., Wüthrich, M., Schaal, S., Bohg, J.

IEEE Robotics and Automation Letters (RA-L), 2(2):577-584, April 2017 (article)

Abstract
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
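The joint-measurement filtering half of such a method can be illustrated with a textbook scalar Kalman predict/update cycle; the paper's actual filter additionally models measurement biases and fuses asynchronous depth images via the Coordinate Particle Filter, which this sketch omits. The noise values are placeholders.

```python
def kalman_step(mean, var, u, z, q=1e-3, r=1e-2):
    """One predict/update cycle of a scalar Kalman filter (model x' = x + u).

    mean, var: current belief; u: commanded increment; z: noisy encoder
    measurement; q, r: process and measurement noise variances.
    """
    mean, var = mean + u, var + q        # predict, inflating by process noise
    k = var / (var + r)                  # Kalman gain
    mean = mean + k * (z - mean)         # correct toward the measurement
    var = (1.0 - k) * var                # posterior variance shrinks
    return mean, var
```

Iterating this per joint gives a cheap, rate-decoupled angle estimate that an image-based update can then refine asynchronously.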

arXiv video code and dataset video PDF DOI Project Page [BibTex]


Anticipatory Action Selection for Human-Robot Table Tennis

Wang, Z., Boularias, A., Mülling, K., Schölkopf, B., Peters, J.

Artificial Intelligence, 247, pages: 399-414, 2017, Special Issue on AI and Robotics (article)

Abstract
Anticipation can enhance the capability of a robot in its interaction with humans, where the robot predicts the humans' intention for selecting its own action. We present a novel framework of anticipatory action selection for human-robot interaction, which is capable of handling nonlinear and stochastic human behaviors such as table tennis strokes and allows the robot to choose the optimal action based on prediction of the human partner's intention with uncertainty. The presented framework is generic and can be used in many human-robot interaction scenarios, for example, in navigation and human-robot co-manipulation. In this article, we conduct a case study on human-robot table tennis. Due to the limited amount of time for executing hitting movements, a robot usually needs to initiate its hitting movement before the opponent hits the ball, which requires the robot to be anticipatory based on visual observation of the opponent's movement. Previous work on Intention-Driven Dynamics Models (IDDM) allowed the robot to predict the intended target of the opponent. In this article, we address the problem of action selection and optimal timing for initiating a chosen action by formulating the anticipatory action selection as a Partially Observable Markov Decision Process (POMDP), where the transition and observation are modeled by the IDDM framework. We present two approaches to anticipatory action selection based on the POMDP formulation: a model-free policy learning method based on Least-Squares Policy Iteration (LSPI) that employs the IDDM for belief updates, and a model-based Monte-Carlo Planning (MCP) method, which benefits from the transition and observation model provided by the IDDM. Experimental results using real data in a simulated environment show the importance of anticipatory action selection, and that POMDPs are suitable for formulating the anticipatory action selection problem by taking into account the uncertainties in prediction. We also show that existing algorithms for POMDPs, such as LSPI and MCP, can be applied to substantially improve the robot's performance in its interaction with humans.
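The belief update at the heart of any POMDP formulation is a discrete Bayes filter. The sketch below shows it for a finite state space, with made-up transition and observation tensors standing in for the IDDM models used in the paper.

```python
import numpy as np

def belief_update(b, T, O, a, z):
    """Discrete POMDP belief update.

    b: (S,) current belief over states
    T: (A, S, S) transition model, T[a, s, s'] = P(s' | s, a)
    O: (A, S, O) observation model, O[a, s', z] = P(z | s', a)
    Returns the normalized posterior belief after action a, observation z.
    """
    pred = b @ T[a]              # prediction: sum_s b(s) P(s' | s, a)
    post = O[a][:, z] * pred     # correction by observation likelihood
    return post / post.sum()     # renormalize
```

Both the LSPI and MCP variants mentioned above consume a belief of exactly this kind; only how they turn beliefs into actions differs.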

DOI Project Page [BibTex]



Robot Learning

Peters, J., Lee, D., Kober, J., Nguyen-Tuong, D., Bagnell, J., Schaal, S.

In Springer Handbook of Robotics, 2nd edition, Chapter 15, pages: 357-394, (Editors: Siciliano, Bruno and Khatib, Oussama), Springer International Publishing, 2017 (inbook)

Project Page [BibTex]


2015


Gaussian Process Optimization for Self-Tuning Control

Marco, A.

Polytechnic University of Catalonia (BarcelonaTech), October 2015 (mastersthesis)

PDF Project Page [BibTex]



Adaptive and Learning Concepts in Hydraulic Force Control

Doerr, A.

University of Stuttgart, September 2015 (mastersthesis)

[BibTex]



Object Detection Using Deep Learning - Learning where to search using visual attention

Kloss, A.

Eberhard Karls Universität Tübingen, May 2015 (mastersthesis)

Abstract
Detecting and identifying the different objects in an image fast and reliably is an important skill for interacting with one's environment. The main problem is that in theory, all parts of an image have to be searched for objects on many different scales to make sure that no object instance is missed. However, it takes considerable time and effort to actually classify the content of a given image region, and both the time and the computational capacity that an agent can spend on classification are limited. Humans use a process called visual attention to quickly decide which locations of an image need to be processed in detail and which can be ignored. This allows us to deal with the huge amount of visual information and to employ the capacities of our visual system efficiently. For computer vision, researchers have to deal with exactly the same problems, so learning from the behaviour of humans provides a promising way to improve existing algorithms. In the presented master's thesis, a model is trained with eye tracking data recorded from 15 participants who were asked to search images for objects from three different categories. It uses a deep convolutional neural network to extract features from the input image that are then combined to form a saliency map. This map provides information about which image regions are interesting when searching for the given target object and can thus be used to reduce the parts of the image that have to be processed in detail. The method is based on a recent publication by Kümmerer et al., but in contrast to the original method, which computes general, task-independent saliency, the presented model is designed to respond differently when searching for different target categories.

PDF Project Page [BibTex]


Robot Arm Tracking with Random Decision Forests

Widmaier, F.

Eberhard-Karls-Universität Tübingen, May 2015 (mastersthesis)

Abstract
For grasping and manipulation with robot arms, knowing the current pose of the arm is crucial for successfully controlling its motion. Often, pose estimates can be acquired from encoders inside the arm, but these can be significantly inaccurate, which makes the use of additional techniques necessary. In this master's thesis, a novel approach to robot arm pose estimation is presented that works on single depth images without the need for prior foreground segmentation or other preprocessing steps. A random regression forest is used, which is trained only on synthetically generated data. The approach improves on former work by Bohg et al. by considerably reducing the computational effort at both training and test time. The forest in the new method directly estimates the desired joint angles, while in the former approach, the forest casts 3D position votes for the joints, which then have to be clustered and fed into an iterative inverse kinematics process to finally obtain the joint angles. To improve the estimation accuracy, the standard training objective of the forest training is replaced by a specialized function that makes use of a model-dependent distance metric, called DISP. Experimental results show that the specialized objective indeed improves pose estimation, and that the method, despite being trained on synthetic data only, is able to provide reasonable estimates for real data at test time.

PDF Project Page [BibTex]



Lernende Roboter

Trimpe, S.

In Jahrbuch der Max-Planck-Gesellschaft, Max Planck Society, May 2015, (popular science article in German) (inbook)

link (url) [BibTex]



Autonomous Robots

Schaal, S.

In Jahrbuch der Max-Planck-Gesellschaft, May 2015 (incollection)

[BibTex]



Policy Search for Imitation Learning

Doerr, A.

University of Stuttgart, January 2015 (thesis)

link (url) Project Page [BibTex]


Sensory synergy as environmental input integration

Alnajjar, F., Itkonen, M., Berenz, V., Tournier, M., Nagai, C., Shimoda, S.

Frontiers in Neuroscience, 8, pages: 436, 2015 (article)

Abstract
The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensory synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with 9 healthy participants. Proprioceptive information, represented by changes in muscle length, was estimated using the musculoskeletal model analysis software SIMM. Changes in muscle length were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to be likely to have a strong correlation between sensory and muscle signaling as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the prosthesis' sensory system to make the controller simpler.

link (url) DOI [BibTex]



Active Reward Learning with a Novel Acquisition Function

Daniel, C., Kroemer, O., Viering, M., Metz, J., Peters, J.

Autonomous Robots, 39(3):389-405, 2015 (article)

link (url) DOI [BibTex]



Learning Movement Primitive Attractor Goals and Sequential Skills from Kinesthetic Demonstrations

Manschitz, S., Kober, J., Gienger, M., Peters, J.

Robotics and Autonomous Systems, 74, Part A, pages: 97-107, 2015 (article)

link (url) DOI [BibTex]



Bayesian Optimization for Learning Gaits under Uncertainty

Calandra, R., Seyfarth, A., Peters, J., Deisenroth, M.

Annals of Mathematics and Artificial Intelligence, pages: 1-19, 2015 (article)

DOI [BibTex]



Tacit Learning for Emergence of Task-Related Behaviour through Signal Accumulation

Berenz, V., Alnajjar, F., Hayashibe, M., Shimoda, S.

In Emergent Trends in Robotics and Intelligent Systems: Where is the Role of Intelligent Technologies in the Next Generation of Robots?, pages: 31-38, Springer International Publishing, Cham, 2015 (inbook)

link (url) DOI [BibTex]



Robot Learning

Peters, J., Lee, D., Kober, J., Nguyen-Tuong, D., Bagnell, J. A., Schaal, S.

In Springer Handbook of Robotics 2nd Edition, pages: 1371-1394, Springer Berlin Heidelberg, Berlin, Heidelberg, 2015 (incollection)

[BibTex]


2001


Synchronized robot drumming by neural oscillator

Kotosaka, S., Schaal, S.

Journal of the Robotics Society of Japan, 19(1):116-123, 2001, clmc (article)

Abstract
Sensory-motor integration is one of the key issues in robotics. In this paper, we propose an approach to rhythmic arm movement control that is synchronized with an external signal by exploiting a simple neural oscillator network. Trajectory generation by the neural oscillator is a biologically inspired method that allows us to generate smooth and continuous trajectories. Tuning the oscillator parameters generates synchronized movement over a wide range of intervals. We adopted the method for a drumming task as an example. Using this method, the robot can realize synchronized drumming with widely varying drumming intervals in real time. The paper also shows experimental results of drumming by a humanoid robot.
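A commonly used neural oscillator of the kind referenced here is the Matsuoka half-center model: two mutually inhibiting neurons with adaptation. The Euler-integration sketch below uses typical parameter values from the literature, not those of the paper; it only illustrates the structure of such a network.

```python
def matsuoka(steps=4000, dt=0.001, tau=0.25, T=0.5, beta=2.5, w=2.5, u=1.0):
    """Euler simulation of a two-neuron Matsuoka oscillator.

    x: membrane states, v: adaptation (self-inhibition) states,
    y = max(0, x): rectified firing rates. The two half-center neurons
    inhibit each other with weight w and receive tonic drive u.
    """
    x = [0.1, 0.0]   # small asymmetry breaks the symmetric equilibrium
    v = [0.0, 0.0]
    out = []
    for _ in range(steps):
        y = [max(0.0, xi) for xi in x]
        out.append(y[0] - y[1])          # alternating drive signal
        dx = [(-x[i] - beta * v[i] - w * y[1 - i] + u) / tau for i in range(2)]
        dv = [(-v[i] + y[i]) / T for i in range(2)]
        for i in range(2):
            x[i] += dt * dx[i]
            v[i] += dt * dv[i]
    return out
```

The output difference can drive a rhythmic joint, and entrainment to an external beat is obtained by adding the sensed signal to the drive term, which is the coupling idea the abstract describes.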

[BibTex]



Origins and violations of the 2/3 power law in rhythmic 3D movements

Schaal, S., Sternad, D.

Experimental Brain Research, 136, pages: 60-72, 2001, clmc (article)

Abstract
The 2/3 power law, the nonlinear relationship between tangential velocity and radius of curvature of the endeffector trajectory, has been suggested as a fundamental constraint of the central nervous system in the formation of rhythmic endpoint trajectories. However, studies on the 2/3 power law have largely been confined to planar drawing patterns of relatively small size. With the hypothesis that this strategy overlooks nonlinear effects that are constitutive in movement generation, the present experiments tested the validity of the power law in elliptical patterns which were not confined to a planar surface and which were performed by the unconstrained 7-DOF arm with significant variations in pattern size and workspace orientation. Data were recorded from five human subjects where the seven joint angles and the endpoint trajectories were analyzed. Additionally, an anthropomorphic 7-DOF robot arm served as a "control subject" whose endpoint trajectories were generated on the basis of the human joint angle data, modeled as simple harmonic oscillations. Analyses of the endpoint trajectories demonstrate that the power law is systematically violated with increasing pattern size, in both exponent and the goodness of fit. The origins of these violations can be explained analytically based on smooth rhythmic trajectory formation and the kinematic structure of the human arm. We conclude that in unconstrained rhythmic movements, the power law seems to be a by-product of a movement system that favors smooth trajectories, and that it is unlikely to serve as a primary movement generating principle. Our data rather suggests that subjects employed smooth oscillatory pattern generators in joint space to realize the required movement patterns.
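For reference, the law under test can be stated compactly. With \(v\) the tangential velocity, \(r\) the radius of curvature, \(c = 1/r\) the curvature, and \(a = v/r\) the angular velocity:

```latex
v(t) = K \, r(t)^{1/3}
\quad\Longleftrightarrow\quad
a(t) = K \, c(t)^{2/3}
```

The second form gives the law its name; the experiments above probe whether the exponent stays at 2/3 as pattern size and workspace orientation vary.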

link (url) [BibTex]



Graph-matching vs. entropy-based methods for object detection

Neural Networks, 14(3):345-354, 2001, clmc (article)

Abstract
Labeled Graph Matching (LGM) has been shown successful in numerous object vision tasks. This method is the basis for arguably the best face recognition system in the world. We present an algorithm for visual pattern recognition that is an extension of LGM ("LGM+"). We compare the performance of the LGM and LGM+ algorithms with a state-of-the-art statistical method based on Mutual Information Maximization (MIM). We present an adaptation of the MIM method for multi-dimensional Gabor wavelet features. The three pattern recognition methods were evaluated on an object detection task, using a set of stimuli on which none of the methods had been tested previously. The results indicate that while the performance of the MIM method operating upon Gabor wavelets is superior to the same method operating on pixels and to LGM, it is surpassed by LGM+. LGM+ offers a significant improvement in performance over LGM without losing LGM's virtues of simplicity, biological plausibility, and a computational cost that is 2-3 orders of magnitude lower than that of the MIM algorithm.

link (url) [BibTex]



Biomimetic gaze stabilization based on feedback-error learning with nonparametric regression networks

Shibata, T., Schaal, S.

Neural Networks, 14(2):201-216, 2001, clmc (article)

Abstract
Oculomotor control in a humanoid robot faces problems similar to those of biological oculomotor systems, i.e., the stabilization of gaze in the face of unknown perturbations of the body, selective attention, stereo vision, and dealing with large information processing delays. Given the nonlinearities of the geometry of binocular vision as well as the possible nonlinearities of the oculomotor plant, it is desirable to accomplish accurate control of these behaviors through learning approaches. This paper develops a learning control system for the phylogenetically oldest behaviors of oculomotor control, the stabilization reflexes of gaze. In a step-wise procedure, we demonstrate how control-theoretically reasonable choices of control components result in an oculomotor control system that resembles the known functional anatomy of the primate oculomotor system. The core of the learning system is derived from the biologically inspired principle of feedback-error learning combined with a state-of-the-art non-parametric statistical learning network. With this circuitry, we demonstrate that our humanoid robot is able to acquire high performance visual stabilization reflexes after about 40 s of learning despite significant nonlinearities and processing delays in the system.

link (url) [BibTex]


Fast learning of biomimetic oculomotor control with nonparametric regression networks (in Japanese)

Shibata, T., Schaal, S.

Journal of the Robotics Society of Japan, 19(4):468-479, 2001, clmc (article)

[BibTex]



Bouncing a ball: Tuning into dynamic stability

Sternad, D., Duarte, M., Katsumata, H., Schaal, S.

Journal of Experimental Psychology: Human Perception and Performance, 27(5):1163-1184, 2001, clmc (article)

Abstract
Rhythmically bouncing a ball with a racket was investigated and modeled with a nonlinear map. Model analyses provided a variable defining a dynamically stable solution that obviates computationally expensive corrections. Three experiments evaluated whether dynamic stability is optimized and what perceptual support is necessary for stable behavior. Two hypotheses were tested: (a) Performance is stable if racket acceleration is negative at impact, and (b) variability is lowest at an impact acceleration between -4 and -1 m/s2. In Experiment 1 participants performed the task, eyes open or closed, bouncing a ball confined to a 1-dimensional trajectory. Experiment 2 eliminated constraints on racket and ball trajectory. Experiment 3 excluded visual or haptic information. Movements were performed with negative racket accelerations in the range of highest stability. Performance with eyes closed was more variable, leaving acceleration unaffected. With haptic information, performance was more stable than with visual information alone.

[BibTex]



Biomimetic oculomotor control

Shibata, T., Vijayakumar, S., Conradt, J., Schaal, S.

Adaptive Behavior, 9(3/4):189-207, 2001, clmc (article)

Abstract
Oculomotor control in a humanoid robot faces problems similar to those of biological oculomotor systems, i.e., capturing targets accurately on a very narrow fovea, dealing with large delays in the control system, the stabilization of gaze in the face of unknown perturbations of the body, selective attention, and the complexity of stereo vision. In this paper, we suggest control circuits to realize three of the most basic oculomotor behaviors and their integration: the vestibulo-ocular and optokinetic reflex (VOR-OKR) for gaze stabilization, smooth pursuit for tracking moving objects, and saccades for overt visual attention. Each of these behaviors and the mechanism for their integration was derived with inspiration from computational theories as well as behavioral and physiological data in neuroscience. Our implementations on a humanoid robot demonstrate good performance of the oculomotor behaviors, which proves to be a viable strategy to explore novel control mechanisms for humanoid robotics. Conversely, insights gained from our models have directly influenced views and provided new directions for computational neuroscience research.

link (url) [BibTex]


1995


Batting a ball: Dynamics of a rhythmic skill

Sternad, D., Schaal, S., Atkeson, C. G.

In Studies in Perception and Action, pages: 119-122, (Editors: Bardy, B.; Bootsma, R.; Guiard, Y.), Erlbaum, Hillsdale, NJ, 1995, clmc (inbook)

[BibTex]



Memory-based neural networks for robot learning

Atkeson, C. G., Schaal, S.

Neurocomputing, 9, pages: 1-27, 1995, clmc (article)

Abstract
This paper explores a memory-based approach to robot learning, using memory-based neural networks to learn models of the task to be performed. Steinbuch and Taylor presented neural network designs to explicitly store training data and do nearest neighbor lookup in the early 1960s. In this paper their nearest neighbor network is augmented with a local model network, which fits a local model to a set of nearest neighbors. This network design is equivalent to a statistical approach known as locally weighted regression, in which a local model is formed to answer each query, using a weighted regression in which nearby points (similar experiences) are weighted more than distant points (less relevant experiences). We illustrate this approach by describing how it has been used to enable a robot to learn a difficult juggling task. Keywords: memory-based, robot learning, locally weighted regression, nearest neighbor, local models.
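The locally weighted regression scheme described above is easy to state concretely: for each query, fit a weighted least-squares line in which training points are weighted by a kernel centered on the query, so that similar experiences dominate the local model. A minimal one-dimensional sketch with a Gaussian kernel (the function name and bandwidth are illustrative, not from the paper):

```python
import numpy as np

def lwr_predict(X, y, xq, tau=0.5):
    """Locally weighted linear regression prediction at query point xq.

    A separate weighted least-squares line is fit per query; weights decay
    with a Gaussian kernel of bandwidth tau, so nearby training points
    (similar experiences) count more than distant ones.
    """
    Xa = np.column_stack([np.ones(len(X)), X])               # [1, x] design matrix
    sw = np.exp(-((X - xq) ** 2) / (2.0 * tau ** 2)) ** 0.5  # sqrt of kernel weights
    # Scaling rows by sqrt(w) makes ordinary least squares solve the
    # w-weighted problem: ||sqrt(w)(Xa b - y)||^2 = sum_i w_i (x_i b - y_i)^2.
    beta, *_ = np.linalg.lstsq(Xa * sw[:, None], y * sw, rcond=None)
    return float(np.array([1.0, xq]) @ beta)
```

Because every query gets its own fit, adding new experience is just appending a data point, which is the "memory-based" property the abstract emphasizes.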

link (url) [BibTex]


1991


Ways to smarter CAD-systems

Ehrlenspiel, K., Schaal, S.

In Proceedings of ICED'91, pages: 10-16, (Editors: Hubka), Edition Heurista, Schriftenreihe WDK 21, Zürich, 1991, clmc (inbook)

[BibTex]
