

2015


Force estimation and slip detection/classification for grip control using a biomimetic tactile sensor

Su, Z., Hausman, K., Chebotar, Y., Molchanov, A., Loeb, G. E., Sukhatme, G. S., Schaal, S.

In IEEE-RAS International Conference on Humanoid Robots (Humanoids), pages: 297-303, 2015 (inproceedings)

link (url) [BibTex]



Policy Learning with Hypothesis Based Local Action Selection

Sankaran, B., Bohg, J., Ratliff, N., Schaal, S.

In Reinforcement Learning and Decision Making, 2015 (inproceedings)

Abstract
For robots to manipulate in unknown and unstructured environments, they must be able to operate under partial observability. Object occlusions and unmodeled environments are some of the factors that cause partial observability, and manipulation in clutter is a common scenario where it is encountered. When the robot needs to locate an object of interest and manipulate it, it must perform a series of decluttering actions to accurately detect that object. To perform such a series of actions, the robot also needs to account for the dynamics of objects in the environment and how they react to contact. This is a non-trivial problem, since one needs to reason not only about robot-object interactions but also about object-object interactions in the presence of contact. In the example scenario of manipulation in clutter, the state vector would have to account for the pose of the object of interest and the structure of the surrounding environment, and the process model would have to account for all the aforementioned robot-object and object-object interactions. The complexity of the process model grows exponentially with the number of objects in the scene, which is commonly large in unstructured environments. Hence it is not reasonable to attempt to model all object-object and robot-object interactions explicitly. In this setting, we propose a hypothesis-based action selection algorithm: we construct a hypothesis set of the possible poses of an object of interest given the current evidence in the scene and select actions based on the current set of hypotheses. This hypothesis set represents the belief about the structure of the environment and the poses the object of interest can take. The agent's only stopping criterion is when the uncertainty regarding the pose of the object is fully resolved.
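
The select-then-prune loop described in the abstract can be sketched in a few lines. This is a minimal illustration under strong assumptions, not the authors' implementation: `predict`, `execute`, and the greedy pruning criterion are all placeholders invented for the example.

```python
def select_action(hypotheses, actions, predict):
    """Greedily pick the action expected to eliminate the most hypotheses.

    predict(action, pose) -> True if that pose is expected to remain
    plausible after the action (a crude surrogate for the belief update).
    """
    def expected_survivors(action):
        return sum(predict(action, h) for h in hypotheses)
    return min(actions, key=expected_survivors)


def resolve_pose(hypotheses, actions, predict, execute):
    """Act until the pose uncertainty is fully resolved (one hypothesis left).

    execute(action) -> a consistency test: observation(pose) is True for
    poses compatible with what was actually observed.
    """
    while len(hypotheses) > 1:
        action = select_action(hypotheses, actions, predict)
        observation = execute(action)
        remaining = [h for h in hypotheses if observation(h)]
        if len(remaining) == len(hypotheses):  # no progress: give up
            break
        hypotheses = remaining
    return hypotheses[0] if hypotheses else None
```

The stopping condition mirrors the abstract: the loop only terminates once a single pose hypothesis remains (or no action can shrink the set further).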

Web Project Page [BibTex]


Learning Optimal Striking Points for a Ping-Pong Playing Robot

Huang, Y., Schölkopf, B., Peters, J.

In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages: 4587-4592, IROS, 2015 (inproceedings)

PDF DOI [BibTex]

Model-Based Relative Entropy Stochastic Search

Abdolmaleki, A., Peters, J., Neumann, G.

In Advances in Neural Information Processing Systems 28, pages: 3523-3531, (Editors: C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama and R. Garnett), Curran Associates, Inc., 29th Annual Conference on Neural Information Processing Systems (NIPS), 2015 (inproceedings)

link (url) [BibTex]

Modeling Spatio-Temporal Variability in Human-Robot Interaction with Probabilistic Movement Primitives

Ewerton, M., Neumann, G., Lioutikov, R., Ben Amor, H., Peters, J., Maeda, G.

In Workshop on Machine Learning for Social Robotics, ICRA, 2015 (inproceedings)

link (url) [BibTex]

Extracting Low-Dimensional Control Variables for Movement Primitives

Rueckert, E., Mundo, J., Paraschos, A., Peters, J., Neumann, G.

In IEEE International Conference on Robotics and Automation, pages: 1511-1518, ICRA, 2015 (inproceedings)

link (url) DOI [BibTex]

A New Perspective and Extension of the Gaussian Filter

Wüthrich, M., Trimpe, S., Kappler, D., Schaal, S.

In Robotics: Science and Systems, 2015 (inproceedings)

Abstract
The Gaussian Filter (GF) is one of the most widely used filtering algorithms; instances are the Extended Kalman Filter, the Unscented Kalman Filter and the Divided Difference Filter. GFs represent the belief of the current state by a Gaussian with the mean being an affine function of the measurement. We show that this representation can be too restrictive to accurately capture the dependencies in systems with nonlinear observation models, and we investigate how the GF can be generalized to alleviate this problem. To this end we view the GF from a variational-inference perspective, and analyze how restrictions on the form of the belief can be relaxed while maintaining simplicity and efficiency. This analysis provides a basis for generalizations of the GF. We propose one such generalization which coincides with a GF using a virtual measurement, obtained by applying a nonlinear function to the actual measurement. Numerical experiments show that the proposed Feature Gaussian Filter (FGF) can have a substantial performance advantage over the standard GF for systems with nonlinear observation models.
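
The core construction, a standard Gaussian-filter update applied to a virtual measurement z = phi(y), can be sketched with sampling-based moment matching. The Monte Carlo approximation and all function names here are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def fgf_update(mu, Sigma, y, obs_model, phi, n_samples=2000, rng=None):
    """One Gaussian-filter measurement update on the virtual measurement
    z = phi(y): moment-match the joint of state and feature with samples,
    then condition the joint Gaussian on z (the usual affine GF update)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X = rng.multivariate_normal(mu, Sigma, size=n_samples)           # prior samples
    Z = np.array([phi(obs_model(x, rng)) for x in X]).reshape(n_samples, -1)
    z = np.atleast_1d(np.asarray(phi(y), dtype=float))
    mx, mz = X.mean(axis=0), Z.mean(axis=0)
    Pzz = np.atleast_2d(np.cov(Z.T)) + 1e-9 * np.eye(z.size)         # feature covariance
    Pxz = (X - mx).T @ (Z - mz) / (n_samples - 1)                    # cross-covariance
    K = Pxz @ np.linalg.inv(Pzz)                                     # Kalman-style gain
    return mx + K @ (z - mz), Sigma - K @ Pzz @ K.T
```

With `phi` set to the identity this reduces to a sampling-based standard GF; a nonlinear `phi` yields the feature-based generalization the abstract describes.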

Web PDF Project Page [BibTex]


Learning multiple collaborative tasks with a mixture of Interaction Primitives

Ewerton, M., Neumann, G., Lioutikov, R., Ben Amor, H., Peters, J., Maeda, G.

In IEEE International Conference on Robotics and Automation, pages: 1535-1542, ICRA, 2015 (inproceedings)

link (url) DOI [BibTex]

Whole-body motor strategies for balancing on a beam when changing the number of available degrees of freedom

Chiovetto, E., Huber, M., Righetti, L., Schaal, S., Sternad, D., Giese, M.

In Progress in Motor Control X, Budapest, Hungary, 2015 (inproceedings)

[BibTex]

From Humans to Robots and Back: Role of Arm Movement in Medio-lateral Balance Control

Huber, M., Chiovetto, E., Schaal, S., Giese, M., Sternad, D.

In Annual Meeting of Neural Control of Movement, Charleston, NC, 2015 (inproceedings)

[BibTex]

Trajectory generation for multi-contact momentum control

Herzog, A., Rotella, N., Schaal, S., Righetti, L.

In 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pages: 874-880, IEEE, Seoul, South Korea, 2015 (inproceedings)

Abstract
Simplified models of the dynamics such as the linear inverted pendulum model (LIPM) have proven to perform well for biped walking on flat ground. However, for more complex tasks the assumptions of these models can become limiting. For example, the LIPM does not allow for the control of contact forces independently, is limited to co-planar contacts and assumes that the angular momentum is zero. In this paper, we propose to use the full momentum equations of a humanoid robot in a trajectory optimization framework to plan its center of mass, linear and angular momentum trajectories. The model also allows for planning desired contact forces for each end-effector in arbitrary contact locations. We extend our previous results on linear quadratic regulator (LQR) design for momentum control by computing the (linearized) optimal momentum feedback law in a receding horizon fashion. The resulting desired momentum and the associated feedback law are then used in a hierarchical whole body control approach. Simulation experiments show that the approach is computationally fast and is able to generate plans for locomotion on complex terrains while demonstrating good tracking performance for the full humanoid control.
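
The receding-horizon LQR ingredient of the abstract can be illustrated compactly: re-solve a finite-horizon Riccati recursion at every step and apply only the first feedback gain. The linear dynamics, cost weights, and function names below are assumptions for the sketch, not the paper's momentum model.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, horizon):
    """Backward Riccati recursion; returns the first-step feedback gain K."""
    P = Q.copy()
    K = None
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain
        P = Q + A.T @ P @ (A - B @ K)                      # cost-to-go update
    return K


def receding_horizon_control(x0, A, B, Q, R, horizon=20, steps=50):
    """Re-solve the finite-horizon LQR at every step and apply u = -K x."""
    x = x0.copy()
    for _ in range(steps):
        K = finite_horizon_lqr(A, B, Q, R, horizon)
        u = -K @ x
        x = A @ x + B @ u
    return x
```

In the paper the linearization (and hence the gain) changes along the planned momentum trajectory, which is why the recursion is repeated each step; for the time-invariant toy system above the re-solve is redundant but shows the structure.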

link (url) DOI [BibTex]

Humanoid Momentum Estimation Using Sensed Contact Wrenches

Rotella, N., Herzog, A., Schaal, S., Righetti, L.

In 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pages: 556-563, IEEE, Seoul, South Korea, 2015 (inproceedings)

Abstract
This work presents approaches for the estimation of quantities important for the control of the momentum of a humanoid robot. In contrast to previous approaches which use simplified models such as the Linear Inverted Pendulum Model, we present estimators based on the momentum dynamics of the robot. By using this simple yet dynamically-consistent model, we avoid the issues of using simplified models for estimation. We develop an estimator for the center of mass and full momentum which can be reformulated to estimate center of mass offsets as well as external wrenches applied to the robot. The observability of these estimators is investigated and their performance is evaluated in comparison to previous approaches.

link (url) DOI [BibTex]


1997


Learning from demonstration

Schaal, S.

In Advances in Neural Information Processing Systems 9, pages: 1040-1046, (Editors: Mozer, M. C.; Jordan, M.; Petsche, T.), MIT Press, Cambridge, MA, 1997, clmc (inproceedings)

Abstract
By now it is widely accepted that learning a task from scratch, i.e., without any prior knowledge, is a daunting undertaking. Humans, however, rarely attempt to learn from scratch. They extract initial biases as well as strategies for approaching a learning problem from instructions and/or demonstrations of other humans. For learning control, this paper investigates how learning from demonstration can be applied in the context of reinforcement learning. We consider priming the Q-function, the value function, the policy, and the model of the task dynamics as possible areas where demonstrations can speed up learning. In general nonlinear learning problems, only model-based reinforcement learning shows significant speed-up after a demonstration, while in the special case of linear quadratic regulator (LQR) problems, all methods profit from the demonstration. In an implementation of pole balancing on a complex anthropomorphic robot arm, we demonstrate that, when facing the complexities of real signal processing, model-based reinforcement learning offers the most robustness for LQR problems. Using the suggested methods, the robot learns pole balancing in just a single trial after a 30-second demonstration by the human instructor.

link (url) [BibTex]



Robot learning from demonstration

Atkeson, C. G., Schaal, S.

In Machine Learning: Proceedings of the Fourteenth International Conference (ICML ’97), pages: 12-20, (Editors: Fisher Jr., D. H.), Morgan Kaufmann, Nashville, TN, July 8-12, 1997, 1997, clmc (inproceedings)

Abstract
The goal of robot learning from demonstration is to have a robot learn from watching a demonstration of the task to be performed. In our approach to learning from demonstration the robot learns a reward function from the demonstration and a task model from repeated attempts to perform the task. A policy is computed based on the learned reward function and task model. Lessons learned from an implementation on an anthropomorphic robot arm using a pendulum swing up task include 1) simply mimicking demonstrated motions is not adequate to perform this task, 2) a task planner can use a learned model and reward function to compute an appropriate policy, 3) this model-based planning process supports rapid learning, 4) both parametric and nonparametric models can be learned and used, and 5) incorporating a task level direct learning component, which is non-model-based, in addition to the model-based planner, is useful in compensating for structural modeling errors and slow model learning. 

link (url) [BibTex]

Local dimensionality reduction for locally weighted learning

Vijayakumar, S., Schaal, S.

In International Conference on Computational Intelligence in Robotics and Automation, pages: 220-225, Monterey, CA, July 10-11, 1997, 1997, clmc (inproceedings)

Abstract
Incremental learning of sensorimotor transformations in high-dimensional spaces is one of the basic prerequisites for the success of autonomous robot devices as well as biological movement systems. So far, due to the sparsity of data in high-dimensional spaces, learning in such settings requires a significant amount of prior knowledge about the learning task, usually provided by a human expert. In this paper we suggest a partial revision of this view. Empirical studies show that, despite being globally high-dimensional and sparse, data distributions from physical movement systems are locally low-dimensional and dense. Under this assumption, we derive a learning algorithm, Locally Adaptive Subspace Regression, that exploits this property by combining local dimensionality reduction as a preprocessing step with a nonparametric learning technique, locally weighted regression. The usefulness of the algorithm and the validity of its assumptions are illustrated for a synthetic data set and for data of the inverse dynamics of an actual 7-degree-of-freedom anthropomorphic robot arm.
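
The two-stage idea, local dimensionality reduction followed by locally weighted regression, can be sketched as below. This toy uses plain kernel-weighted PCA rather than the paper's Locally Adaptive Subspace Regression; the function name, Gaussian kernel, and ridge term are assumptions for the example.

```python
import numpy as np

def lwr_with_local_pca(X, y, x_query, bandwidth=1.0, n_components=1):
    """Predict y at x_query: weight data by a Gaussian kernel around the
    query, project onto the top local principal components, then fit a
    weighted linear model in the reduced space."""
    d = X - x_query
    w = np.exp(-0.5 * np.sum(d**2, axis=1) / bandwidth**2)   # local weights
    mean = (w[:, None] * X).sum(axis=0) / w.sum()            # weighted mean
    Xc = X - mean
    cov = (w[:, None] * Xc).T @ Xc / w.sum()                 # weighted covariance
    _, vecs = np.linalg.eigh(cov)                            # eigenvalues ascending
    U = vecs[:, -n_components:]                              # top local directions
    Z = Xc @ U                                               # reduced inputs
    Zb = np.hstack([Z, np.ones((len(Z), 1))])                # add bias column
    W = np.diag(w)
    beta = np.linalg.solve(Zb.T @ W @ Zb + 1e-9 * np.eye(Zb.shape[1]),
                           Zb.T @ W @ y)                     # weighted least squares
    zq = (x_query - mean) @ U
    return np.append(zq, 1.0) @ beta
```

When the data are locally low-dimensional, as the abstract argues for physical movement systems, the regression in the reduced space sees dense data even though the ambient space is sparse.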

link (url) [BibTex]

Learning tasks from a single demonstration

Atkeson, C. G., Schaal, S.

In IEEE International Conference on Robotics and Automation (ICRA97), 2, pages: 1706-1712, Piscataway, NJ: IEEE, Albuquerque, NM, 20-25 April, 1997, clmc (inproceedings)

Abstract
Learning a complex dynamic robot manoeuvre from a single human demonstration is difficult. This paper explores an approach to learning from demonstration based on learning an optimization criterion from the demonstration and a task model from repeated attempts to perform the task, and then using the learned criterion and model to compute an appropriate robot movement. A preliminary version of the approach has been implemented on an anthropomorphic robot arm using a pendulum swing-up task as an example.

link (url) [BibTex]


1993


Roles for memory-based learning in robotics

Atkeson, C. G., Schaal, S.

In Proceedings of the Sixth International Symposium on Robotics Research, pages: 503-521, Hidden Valley, PA, 1993, clmc (inproceedings)

[BibTex]

Open loop stable control strategies for robot juggling

Schaal, S., Atkeson, C. G.

In IEEE International Conference on Robotics and Automation, 3, pages: 913-918, Piscataway, NJ: IEEE, Atlanta, GA, May 2-6, 1993, clmc (inproceedings)

Abstract
In a series of case studies from the field of dynamic manipulation (Mason, 1992), different principles for open-loop stable control are introduced and analyzed. This investigation may provide some insight into how open-loop control can serve as a useful foundation for closed-loop control and, particularly, what to focus on in learning control.

link (url) [BibTex]


1992


What should be learned?

Schaal, S., Atkeson, C. G., Botros, S.

In Proceedings of Seventh Yale Workshop on Adaptive and Learning Systems, pages: 199-204, New Haven, CT, May 20-22, 1992, clmc (inproceedings)

[BibTex]
