

2010


Locally weighted regression for control

Ting, J., Vijayakumar, S., Schaal, S.

In Encyclopedia of Machine Learning, pages: 613-624, (Editors: Sammut, C.; Webb, G. I.), Springer, 2010, clmc (inbook)

Abstract
This article addresses two topics: learning control and locally weighted regression.
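
As a rough illustration of the locally weighted regression topic the article covers, here is a minimal sketch of a locally weighted linear fit at a single query point. The Gaussian kernel, the bandwidth h, and the toy data are assumptions introduced for the example, not material from the article.

```python
import numpy as np

def lwr_predict(X, y, x_query, h=0.5):
    """Locally weighted linear regression at a single query point.

    A Gaussian kernel of bandwidth h weights each training point by its
    distance to the query; a weighted least-squares linear fit is then
    evaluated at the query. The bandwidth value is an illustrative choice.
    """
    # Gaussian weights centered on the query point
    w = np.exp(-0.5 * np.sum((X - x_query) ** 2, axis=1) / h ** 2)
    # Augment inputs with a bias term for the local linear model
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)
    # Weighted least squares: beta = (Xa^T W Xa)^{-1} Xa^T W y
    beta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)
    return np.append(x_query, 1.0) @ beta

# Toy example: noisy sine data, prediction at x = 1.0
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
print(lwr_predict(X, y, np.array([1.0])))
```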

link (url) [BibTex]


2008


Efficient inverse kinematics algorithms for high-dimensional movement systems

Tevatia, G., Schaal, S.

CLMC Technical Report: TR-CLMC-2008-1, 2008, clmc (techreport)

Abstract
Real-time control of the end-effector of a humanoid robot in external coordinates requires computationally efficient solutions of the inverse kinematics problem. In this context, this paper investigates methods of resolved motion rate control (RMRC) that employ optimization criteria to resolve kinematic redundancies. In particular, we focus on two established techniques, the pseudo-inverse with explicit optimization and the extended Jacobian method. We prove that the extended Jacobian method includes pseudo-inverse methods as a special solution. In terms of computational complexity, however, pseudo-inverse and extended Jacobian differ significantly in favor of pseudo-inverse methods. Employing numerical estimation techniques, we introduce a computationally efficient version of the extended Jacobian with performance comparable to the original version. Our results are illustrated in simulation studies with a multiple degree-of-freedom robot, and were evaluated on an actual 30 degree-of-freedom full-body humanoid robot.
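
The redundancy-resolution idea summarized above can be sketched as a single pseudo-inverse RMRC step with a null-space optimization term. This is a minimal, hedged illustration, not the report's implementation: the 3-link planar arm, the joint-centering cost, and the gain alpha are assumptions introduced for the example.

```python
import numpy as np

def rmrc_step(q, x_dot_des, jacobian, grad_cost, alpha=0.1):
    """One pseudo-inverse RMRC step with null-space redundancy resolution.

    q_dot = J^+ x_dot_des - alpha * (I - J^+ J) * grad_cost(q)
    J^+ maps the desired task-space velocity to joint velocities; the
    projector (I - J^+ J) lets a secondary cost be reduced without
    disturbing the end-effector motion. alpha is an illustrative gain.
    """
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    null_space = np.eye(len(q)) - J_pinv @ J
    return J_pinv @ x_dot_des - alpha * null_space @ grad_cost(q)

# Illustrative 3-link planar arm: 2D end-effector task, one redundant DOF.
def planar_jacobian(q, lengths=(1.0, 1.0, 1.0)):
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        for k in range(i, len(q)):
            a = np.sum(q[:k + 1])          # absolute angle of link k
            J[0, i] -= lengths[k] * np.sin(a)
            J[1, i] += lengths[k] * np.cos(a)
    return J

# Hypothetical secondary criterion: keep the joints near a zero posture.
grad_cost = lambda q: q

q = np.array([0.3, -0.5, 0.8])
q_dot = rmrc_step(q, np.array([0.05, 0.0]), planar_jacobian, grad_cost)
print(q_dot)
```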

link (url) [BibTex]


1996


From isolation to cooperation: An alternative view of a system of experts

Schaal, S., Atkeson, C. G.

In Advances in Neural Information Processing Systems 8, pages: 605-611, (Editors: Touretzky, D. S.; Mozer, M. C.; Hasselmo, M. E.), MIT Press, Cambridge, MA, 1996, clmc (inbook)

Abstract
We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross validation error using second order methods. In this way, an expert is able to adjust the size and shape of the receptive field in which its predictions are valid, and also to adjust its bias on the importance of individual input dimensions. The size and shape adjustment corresponds to finding a local distance metric, while the bias adjustment accomplishes local dimensionality reduction. We derive asymptotic results for our method. In a variety of simulations we demonstrate the properties of the algorithm with respect to interference, learning speed, prediction accuracy, feature detection, and task oriented incremental learning. 
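
A minimal sketch of the cooperative prediction step described in the abstract: independently trained locally linear experts blend their individual predictions, weighted by Gaussian receptive-field activations. The expert class, its parameter values, and the toy setup below are illustrative assumptions; the penalized local cross validation training described in the paper is not reproduced here.

```python
import numpy as np

class LocalExpert:
    """A locally linear model with a Gaussian receptive field (illustrative)."""

    def __init__(self, center, metric, beta):
        self.center = center    # receptive-field center
        self.metric = metric    # distance metric (d x d): size and shape
        self.beta = beta        # linear coefficients, length d + 1 (with bias)

    def activation(self, x):
        # Gaussian receptive-field activation at query x
        diff = x - self.center
        return np.exp(-0.5 * diff @ self.metric @ diff)

    def predict(self, x):
        # Local linear prediction with a bias term
        return np.append(x, 1.0) @ self.beta

def blended_prediction(experts, x):
    """Cooperative prediction: activation-weighted average of expert outputs."""
    w = np.array([e.activation(x) for e in experts])
    y = np.array([e.predict(x) for e in experts])
    return np.sum(w * y) / np.sum(w)

# Two illustrative 1-D experts, each valid near its own center
experts = [
    LocalExpert(np.array([0.0]), np.eye(1) * 4.0, np.array([1.0, 0.0])),
    LocalExpert(np.array([np.pi / 2]), np.eye(1) * 4.0, np.array([0.0, 1.0])),
]
print(blended_prediction(experts, np.array([0.5])))
```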

link (url) [BibTex]


1991


Ways to smarter CAD-systems

Ehrlenspiel, K., Schaal, S.

In Proceedings of ICED'91, pages: 10-16, (Editors: Hubka), Heurista, Edition Schriftenreihe WDK 21, Zürich, 1991, clmc (inbook)

[BibTex]
