Autonomous Motion

Computational model for movement learning under uncertain cost

2008

Conference Paper

Stochastic optimal control is a framework for computing control commands that lead to optimal behavior under a given cost. Despite the long history of optimal control in engineering, it has only recently been applied to describe human motion. So far, stochastic optimal control has mainly been used for tasks that are already learned, such as reaching to a target; there are only a few cases where optimal control has been applied to learning. The main assumptions of stochastic optimal control that restrict its application to tasks after learning are the a priori knowledge of (1) a quadratic cost function, (2) a state-space model that captures the kinematics and/or dynamics of the musculoskeletal system, and (3) a measurement equation that models the proprioceptive and/or exteroceptive feedback. Under these assumptions, a sequence of control gains is computed that is optimal with respect to the prespecified cost function. In our work, we relax the assumption of an a priori known cost function and provide a computational framework for modeling tasks that involve learning. Typically, a cost function consists of two parts: one part that models the task constraints, such as the squared distance to the goal at the movement endpoint, and one part that integrates over the squared control commands. In learning a task, the first part of this cost function is adapted. We use an expectation-maximization scheme for learning: the expectation step optimizes the task constraints through gradient descent on a reward function, and the maximization step optimizes the control commands. Our computational model is tested against data from a behavioral experiment. In this experiment, subjects sit in front of a drawing tablet and look at a screen onto which the drawing pen's position is projected. Starting from a given point, their task is to move the pen through a target point presented on the screen. Visual feedback about the pen's position is given only before movement onset; at the end of a movement, subjects receive visual feedback only about the cost of that trial. In the mapping of the pen's position onto the screen, we added a bias (unknown to the subjects) and Gaussian noise, so the cost is a function of this bias. The subjects were asked to reach the target and minimize this cost over trials. In this experiment, subjects could learn the bias and thus showed reinforcement learning. Our computational model reproduced the learning process over trials. In particular, the dependence on the parameters of the reward function (the width of the Gaussian) and the modulation of movement variance over time were similar in experiment and model.
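To make the scheme above concrete, here is a minimal numerical sketch in Python. It is an illustrative reconstruction, not the authors' implementation: the one-dimensional pen-to-screen mapping, the variable names, and all numerical values (bias, noise level, reward width, step size) are assumptions. The maximization step is reduced to its simplest form (with a quadratic endpoint cost, the optimal commands drive the pen to the current aim point), and the expectation step adapts the aim point, i.e., the task constraint, by gradient ascent on a Gaussian reward of the endpoint error.

import numpy as np

# Illustrative sketch of the EM-style scheme described in the abstract.
# Names and numbers are assumptions, not taken from the paper.
rng = np.random.default_rng(0)

target = 0.0        # target position on the screen
bias = 0.3          # bias in the pen-to-screen mapping (unknown to the learner)
noise_std = 0.05    # Gaussian noise in the mapping
sigma = 0.2         # width of the Gaussian reward (assumed)
lr = 0.05           # gradient step size (assumed)

aim = 0.0           # learned task constraint: where to aim the pen

def reward(error):
    # Gaussian reward: largest when the observed endpoint hits the target.
    return np.exp(-error**2 / (2.0 * sigma**2))

for trial in range(100):
    # M-step (reduced to its simplest form): with a quadratic endpoint cost,
    # the optimal command sequence brings the pen to the current aim point.
    pen_endpoint = aim

    # Environment: biased, noisy projection of the pen onto the screen.
    observed = pen_endpoint + bias + noise_std * rng.standard_normal()
    error = observed - target

    # E-step: adapt the aim point along the reward gradient;
    # d reward / d aim = -reward(error) * error / sigma**2.
    aim += lr * (-reward(error) * error / sigma**2)

print(f"learned aim point: {aim:+.3f} (compensates the bias of {bias})")

In this toy setting the aim point converges near -0.3, compensating the unknown bias, analogous to the trial-by-trial bias learning described in the abstract.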

Author(s): Theodorou, E. and Hoffmann, H. and Mistry, M. and Schaal, S.
Book Title: Abstracts of the Society for Neuroscience Meeting (SFN 2008)
Year: 2008

Department(s): Autonomous Motion
Bibtex Type: Conference Paper (inproceedings)

Address: Washington, DC
Cross Ref: p10267
Note: clmc

BibTeX

@inproceedings{Theodorou_ASNM_2008,
  title = {Computational model for movement learning under uncertain cost},
  author = {Theodorou, E. and Hoffmann, H. and Mistry, M. and Schaal, S.},
  booktitle = {Abstracts of the Society for Neuroscience Meeting (SFN 2008)},
  address = {Washington, DC},
  year = {2008},
  note = {clmc},
  crossref = {p10267}
}