2016


Implications of Action-Oriented Paradigm Shifts in Cognitive Science

Dominey, P. F., Prescott, T. J., Bohg, J., Engel, A. K., Gallagher, S., Heed, T., Hoffmann, M., Knoblich, G., Prinz, W., Schwartz, A.

In The Pragmatic Turn - Toward Action-Oriented Views in Cognitive Science, pages: 333-356, Strüngmann Forum Reports, vol. 18, J. Lupp, series editor, (Editors: Andreas K. Engel and Karl J. Friston and Danica Kragic), The MIT Press, 18th Ernst Strüngmann Forum, May 2016 (incollection) In press

Abstract
An action-oriented perspective changes the role of an individual from a passive observer to an actively engaged agent interacting in a closed loop with the world as well as with others. Cognition exists to serve action within a landscape that contains both. This chapter surveys this landscape and addresses the status of the pragmatic turn. Its potential influence on science and the study of cognition is considered (including perception, social cognition, social interaction, sensorimotor entrainment, and language acquisition) and its impact on how neuroscience is studied is also investigated (with the notion that brains do not passively build models, but instead support the guidance of action). A review of its implications in robotics and engineering includes a discussion of the application of enactive control principles to couple action and perception in robotics as well as the conceptualization of system design in a more holistic, less modular manner. Practical applications that can impact the human condition are reviewed (e.g. educational applications, treatment possibilities for developmental and psychopathological disorders, the development of neural prostheses). All of this foreshadows the potential societal implications of the pragmatic turn. The chapter concludes that an action-oriented approach emphasizes a continuum of interaction between technical aspects of cognitive systems and robotics, biology, psychology, the social sciences, and the humanities, where the individual is part of a grounded cultural system.

The Pragmatic Turn - Toward Action-Oriented Views in Cognitive Science 18th Ernst Strüngmann Forum Bibliography Chapter link (url) [BibTex]



Learning Action-Perception Cycles in Robotics: A Question of Representations and Embodiment

Bohg, J., Kragic, D.

In The Pragmatic Turn - Toward Action-Oriented Views in Cognitive Science, pages: 309-320, Strüngmann Forum Reports, vol. 18, J. Lupp, series editor, (Editors: Andreas K. Engel and Karl J. Friston and Danica Kragic), The MIT Press, 18th Ernst Strüngmann Forum, May 2016 (incollection) In press

Abstract
Since the 1950s, robotics research has sought to build a general-purpose agent capable of autonomous, open-ended interaction with realistic, unconstrained environments. Cognition is perceived to be at the core of this process, yet understanding has been challenged because cognition is referred to differently within and across research areas, and is not clearly defined. The classic robotics approach is decomposition into functional modules which perform planning, reasoning, and problem-solving or provide input to these mechanisms. Although advancements have been made and numerous success stories reported in specific niches, this systems-engineering approach has not succeeded in building such a cognitive agent. The emergence of an action-oriented paradigm offers a new approach: action and perception are no longer separable into functional modules but must be considered in a complete loop. This chapter reviews work on different mechanisms for action-perception learning and discusses the role of embodiment in the design of the underlying representations and learning. It discusses the evaluation of agents and suggests the development of a new embodied Turing Test. Appropriate scenarios need to be devised in addition to current competitions, so that abilities can be tested over long time periods.

18th Ernst Strüngmann Forum The Pragmatic Turn - Toward Action-Oriented Views in Cognitive Science Bibliography Chapter link (url) [BibTex]



Locally Weighted Regression for Control

Ting, J., Meier, F., Vijayakumar, S., Schaal, S.

In Encyclopedia of Machine Learning and Data Mining, pages: 1-14, Springer US, Boston, MA, 2016 (inbook)

link (url) DOI [BibTex]


2015


Gaussian Process Optimization for Self-Tuning Control

Marco, A.

Polytechnic University of Catalonia (BarcelonaTech), October 2015 (mastersthesis)

PDF Project Page [BibTex]



Adaptive and Learning Concepts in Hydraulic Force Control

Doerr, A.

University of Stuttgart, September 2015 (mastersthesis)

[BibTex]



Object Detection Using Deep Learning - Learning where to search using visual attention

Kloss, A.

Eberhard Karls Universität Tübingen, May 2015 (mastersthesis)

Abstract
Detecting and identifying the different objects in an image fast and reliably is an important skill for interacting with one’s environment. The main problem is that, in theory, all parts of an image have to be searched for objects on many different scales to make sure that no object instance is missed. It however takes considerable time and effort to actually classify the content of a given image region, and both the time and the computational capacity that an agent can spend on classification are limited. Humans use a process called visual attention to quickly decide which locations of an image need to be processed in detail and which can be ignored. This allows us to deal with the huge amount of visual information and to employ the capacities of our visual system efficiently. For computer vision, researchers have to deal with exactly the same problems, so learning from the behaviour of humans provides a promising way to improve existing algorithms. In the presented master’s thesis, a model is trained with eye-tracking data recorded from 15 participants who were asked to search images for objects from three different categories. It uses a deep convolutional neural network to extract features from the input image that are then combined to form a saliency map. This map provides information about which image regions are interesting when searching for the given target object and can thus be used to reduce the parts of the image that have to be processed in detail. The method is based on a recent publication by Kümmerer et al., but in contrast to the original method, which computes general, task-independent saliency, the presented model is supposed to respond differently when searching for different target categories.
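
A rough sketch of the attention mechanism described above, assuming a linear per-category read-out over CNN feature maps followed by a softmax over image locations (in the spirit of Kümmerer-style saliency models); the shapes, weights, and threshold below are illustrative stand-ins, not the thesis' actual architecture:

```python
import numpy as np

def saliency_map(feature_maps, weights, bias=0.0):
    """Combine CNN feature maps into one task-specific saliency map.
    feature_maps: (C, H, W) activations; weights: (C,) per-channel read-out,
    assumed to be learned per target category from the eye-tracking data."""
    combined = np.tensordot(weights, feature_maps, axes=1) + bias   # (H, W)
    p = np.exp(combined - combined.max())   # softmax over image locations
    return p / p.sum()

def candidate_regions(saliency, keep=0.2):
    """Mask of the most salient locations: only these image parts need to be
    passed on to the expensive object classifier."""
    return saliency >= np.quantile(saliency, 1.0 - keep)

# Toy usage with random stand-ins for real CNN activations
feats = np.random.rand(64, 32, 32)      # 64 feature channels
w = np.random.randn(64)                 # read-out weights for one category
mask = candidate_regions(saliency_map(feats, w))
print(mask.mean())                      # roughly 0.2 of locations kept
```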

PDF Project Page [BibTex]


Robot Arm Tracking with Random Decision Forests

Widmaier, F.

Eberhard-Karls-Universität Tübingen, May 2015 (mastersthesis)

Abstract
For grasping and manipulation with robot arms, knowing the current pose of the arm is crucial for successfully controlling its motion. Often, pose estimates can be acquired from encoders inside the arm, but these can be significantly inaccurate, which makes the use of additional techniques necessary. In this master’s thesis, a novel approach to robot arm pose estimation is presented that works on single depth images without the need for prior foreground segmentation or other preprocessing steps. A random regression forest is used, which is trained only on synthetically generated data. The approach improves on former work by Bohg et al. by considerably reducing the computational effort both at training and test time. The forest in the new method directly estimates the desired joint angles, while in the former approach the forest casts 3D position votes for the joints, which then have to be clustered and fed into an iterative inverse kinematics process to finally obtain the joint angles. To improve the estimation accuracy, the standard objective of the forest training is replaced by a specialized function that makes use of a model-dependent distance metric called DISP. Experimental results show that the specialized objective indeed improves pose estimation, and it is shown that the method, despite being trained on synthetic data only, is able to provide reasonable estimates for real data at test time.
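
To make the regression-forest idea concrete, here is a minimal sketch (not the thesis' implementation): a multi-output random forest maps depth-image features directly to joint angles; scikit-learn's standard variance-reduction split criterion stands in for the specialized DISP-based objective, and the synthetic training data are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical shapes: each depth image is reduced to a feature vector
# (e.g. pixel-difference features); the targets are the arm's joint angles.
n_samples, n_features, n_joints = 5000, 200, 7

# Stand-in for the synthetically rendered training data described above.
X_train = np.random.rand(n_samples, n_features)                      # depth features
y_train = np.random.uniform(-np.pi, np.pi, (n_samples, n_joints))    # joint angles

# The forest regresses all joint angles directly, without intermediate
# 3D joint-position votes, clustering, or inverse kinematics.
forest = RandomForestRegressor(n_estimators=50, max_depth=20, n_jobs=-1)
forest.fit(X_train, y_train)

x_test = np.random.rand(1, n_features)   # features of one (real) depth image
print(forest.predict(x_test))            # estimated joint angles in radians
```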

PDF Project Page [BibTex]



Lernende Roboter

Trimpe, S.

In Jahrbuch der Max-Planck-Gesellschaft, Max Planck Society, May 2015, (popular science article in German) (inbook)

link (url) [BibTex]



Autonomous Robots

Schaal, S.

In Jahrbuch der Max-Planck-Gesellschaft, May 2015 (incollection)

[BibTex]



Policy Search for Imitation Learning

Doerr, A.

University of Stuttgart, January 2015 (thesis)

link (url) Project Page [BibTex]


Tacit Learning for Emergence of Task-Related Behaviour through Signal Accumulation

Berenz, V., Alnajjar, F., Hayashibe, M., Shimoda, S.

In Emergent Trends in Robotics and Intelligent Systems: Where is the Role of Intelligent Technologies in the Next Generation of Robots?, pages: 31-38, Springer International Publishing, Cham, 2015 (inbook)

link (url) DOI [BibTex]



Robot Learning

Peters, J., Lee, D., Kober, J., Nguyen-Tuong, D., Bagnell, J. A., Schaal, S.

In Springer Handbook of Robotics 2nd Edition, pages: 1371-1394, Springer Berlin Heidelberg, Berlin, Heidelberg, 2015 (incollection)

[BibTex]


2011


Multi-Modal Scene Understanding for Robotic Grasping

Bohg, J.

Trita-CSC-A 2011:17, vi, 194 pages, KTH Royal Institute of Technology, Computer Vision and Active Perception (CVAP), Centre for Autonomous Systems (CAS), December 2011 (phdthesis)

Abstract
Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can for example be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane, such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s, when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. Especially household robots are far away from being deployable as general-purpose devices. Although advancements have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments given unexpected events and uncertainty in perception and execution. In this thesis, we analyze which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background but also from each other. Once this is achieved, many other tasks become much easier. The configuration of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem and even state-of-the-art methods may fail. Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly. In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can also quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Dependent on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps both in a closed- and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios.
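
A minimal sketch of the uncertainty-guided exploration idea mentioned above: maintain a probabilistic occupancy belief over the scene, and direct the next observation to where the prediction is most uncertain (maximum entropy). The grid, probabilities, and selection criterion are illustrative assumptions, not the scene representations developed in the thesis:

```python
import numpy as np

# Per-cell occupancy probabilities of a small 2D grid (stand-in for the
# robot's predicted, partially observed scene model).
p_occ = np.clip(np.random.rand(20, 20), 0.01, 0.99)

# Binary entropy per cell quantifies how uncertain the prediction is there.
entropy = -(p_occ * np.log(p_occ) + (1 - p_occ) * np.log(1 - p_occ))

# Explore where the model is least sure about its own prediction.
next_view = np.unravel_index(np.argmax(entropy), entropy.shape)
print("most uncertain cell to observe next:", next_view)
```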

pdf [BibTex]



Iterative path integral stochastic optimal control: Theory and applications to motor control

Theodorou, E. A.

University of Southern California, Los Angeles, CA, 2011 (phdthesis)

PDF [BibTex]



Learning of grasp selection based on shape-templates

Herzog, A.

Karlsruhe Institute of Technology, 2011 (mastersthesis)

[BibTex]


2010


Locally weighted regression for control

Ting, J., Vijayakumar, S., Schaal, S.

In Encyclopedia of Machine Learning, pages: 613-624, (Editors: Sammut, C.;Webb, G. I.), Springer, 2010, clmc (inbook)

Abstract
This article addresses two topics: learning control and locally weighted regression.
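
For readers unfamiliar with the technique, the sketch below shows the core of locally weighted regression: every prediction solves a small weighted least-squares problem around the query point, with Gaussian kernel weights. This is a generic textbook formulation, not code from the article:

```python
import numpy as np

def lwr_predict(x_query, X, y, bandwidth=0.3):
    """Predict y at x_query by fitting a linear model to the training data,
    weighted by a Gaussian kernel centered on the query (classic LWR)."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xa = np.hstack([X, np.ones((len(X), 1))])    # add bias column
    W = np.diag(w)
    beta = np.linalg.pinv(Xa.T @ W @ Xa) @ Xa.T @ W @ y
    return np.append(x_query, 1.0) @ beta

# Toy usage: learn y = sin(x) from noisy samples
X = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.randn(100)
print(lwr_predict(np.array([1.5]), X, y))    # close to sin(1.5) ≈ 0.997
```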

link (url) [BibTex]


2000


Biomimetic gaze stabilization

Shibata, T., Schaal, S.

In Robot learning: an Interdisciplinary approach, pages: 31-52, (Editors: Demiris, J.;Birk, A.), World Scientific, 2000, clmc (inbook)

Abstract
Accurate oculomotor control is one of the essential prerequisites for successful visuomotor coordination. In this paper, we suggest a biologically inspired control system for learning gaze stabilization with a biomimetic robotic oculomotor system. In a stepwise fashion, we develop a control circuit for the vestibulo-ocular reflex (VOR) and the opto-kinetic response (OKR), and add a nonlinear learning network to allow adaptivity. We discuss the parallels and differences between our system and biological oculomotor control and suggest how to deal with nonlinearities and time delays in the control system. In simulation and actual robot studies, we demonstrate that our system can learn gaze stabilization in real time in only a few seconds with high final accuracy.
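
The adaptive gaze-stabilization loop can be illustrated with a toy feedback-error-learning sketch: a slow visual feedback pathway (OKR-like) reduces retinal slip, and its command serves as the error signal that trains a feedforward gain from head velocity to compensatory eye velocity (VOR-like). The gains, learning rate, and scalar linear learner are illustrative assumptions, not the paper's nonlinear learning network:

```python
import numpy as np

w = 0.0                # feedforward gain to be learned (ideal value: -1)
k_fb, lr = 0.5, 0.05   # visual feedback gain and learning rate

for t in range(2000):
    head_vel = np.sin(0.01 * t)          # head movement (toy signal)
    eye_ff = w * head_vel                # learned feedforward command (VOR-like)
    retinal_slip = head_vel + eye_ff     # residual image motion on the retina
    eye_fb = -k_fb * retinal_slip        # slow visual feedback (OKR-like)
    # Feedback-error learning: the feedback command adapts the feedforward path.
    w += lr * eye_fb * head_vel

print(w)   # approaches -1: the eye counter-rotates to cancel head motion
```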

link (url) [BibTex]


1991


Ways to smarter CAD-systems

Ehrlenspiel, K., Schaal, S.

In Proceedings of ICED’91, pages: 10-16, (Editors: Hubka), Edition Heurista, Schriftenreihe WDK 21, Zürich, 1991, clmc (inbook)

[BibTex]
