Jim Mainprice leads a research group at the University of Stuttgart, Germany. His research interests include motion planning, machine learning, and human-robot interaction. He holds an M.Sc. from Polytech'Montpellier, France (2009), and a Ph.D. from the University of Toulouse, France (2012). While completing his Ph.D. at LAAS-CNRS, he took part in the EU FP7 projects Dexmart and Saphari. From January 2013 to December 2014, he was a postdoctoral researcher in the Autonomous Robotic Collaboration Lab at Worcester Polytechnic Institute (WPI) in Massachusetts, USA, where he participated in the DARPA Robotics Challenge as a member of the DRC-Hubo team. Since January 2015, he has been affiliated with the Autonomous Motion Department (AMD) of the Max Planck Institute for Intelligent Systems in Tübingen, Germany, and since April 2017 he has led the Humans to Robots Motion (HRM) research group at the University of Stuttgart.
29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), August 2020 (conference), accepted
We propose a method that generates reactive robot behavior learned from human demonstration. To do so, we use the Playful programming language, which is based on the reactive programming paradigm. This allows us to represent the learned behavior as a set of associations between sensor and motor primitives in a human-readable script. Distinguishing between sensor and motor primitives introduces a supplementary level of granularity and, more importantly, enforces feedback, increasing adaptability and robustness. As the experimental section shows, useful behaviors may be learned from a single demonstration covering a very limited portion of the task space.
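The abstract's core representation, a behavior expressed as associations between sensor primitives and motor primitives evaluated in a reactive loop, can be sketched in a few lines. The following is a minimal illustration, not the authors' Playful code; every primitive name and the state dictionary are hypothetical stand-ins.

```python
# Sketch: learned behavior as sensor->motor associations, evaluated reactively.
# All primitive names and the state format are illustrative assumptions.

class Association:
    def __init__(self, sensor, motor):
        self.sensor = sensor  # callable: state -> bool (is the condition active?)
        self.motor = motor    # callable: state -> motor command

def reactive_step(associations, state):
    """One tick of the reactive loop: fire every motor primitive whose
    sensor primitive is currently active. Called continuously, this
    keeps the behavior driven by feedback rather than by a fixed plan."""
    return [a.motor(state) for a in associations if a.sensor(state)]

# Hypothetical primitives, as might be extracted from a demonstration:
ball_visible = lambda s: s["ball_distance"] is not None
ball_close   = lambda s: s["ball_distance"] is not None and s["ball_distance"] < 0.3

track_ball = lambda s: ("turn_towards", s["ball_bearing"])
grasp_ball = lambda s: ("close_gripper", None)

behavior = [Association(ball_visible, track_ball),
            Association(ball_close, grasp_ball)]

if __name__ == "__main__":
    state = {"ball_distance": 0.2, "ball_bearing": 0.1}
    print(reactive_step(behavior, state))  # both associations fire here
```

Because each association is re-evaluated against fresh sensor data on every tick, the script adapts when the world changes, which is the feedback property the abstract emphasizes.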
IEEE Robotics and Automation Letters, 3(3):1864-1871, July 2018 (article)
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty is due to noisy sensing, inaccurate models, and hard-to-predict environment dynamics. Our approach emphasizes the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system where real-time object and robot tracking, as well as ambient world modeling, provides the necessary input to feedback controllers and continuous motion optimizers. Specifically, they provide attractive and repulsive potentials based on which the controllers and motion optimizers compute movement policies online at different time intervals. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment. We compare the proposed integrated system with a more traditional sense-plan-act approach that is still widely used. In 333 experiments, we show the robustness and accuracy of the proposed system.
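The attractive and repulsive potentials mentioned above can be illustrated with the classical potential-field formulation; the paper's actual controllers and motion optimizers are more involved. In this sketch all gains, influence distances, and positions are assumed values, and the goal and obstacle would come from the real-time tracking and world model.

```python
# Sketch of attractive/repulsive potential forces in the classical
# potential-field style. Gains (k_att, k_rep) and the influence
# distance d0 are illustrative assumptions.
import numpy as np

def attractive_force(x, x_goal, k_att=1.0):
    """Pull the end effector towards the (tracked) goal position."""
    return -k_att * (x - x_goal)

def repulsive_force(x, x_obs, k_rep=0.5, d0=0.2):
    """Push away from an obstacle inside the influence distance d0.
    Negative gradient of U_rep = 0.5*k_rep*(1/d - 1/d0)^2."""
    d = np.linalg.norm(x - x_obs)
    if d >= d0 or d == 0.0:
        return np.zeros_like(x)
    return k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (x - x_obs) / d

x        = np.array([0.0, 0.5, 0.3])  # current end-effector position
goal     = np.array([0.4, 0.5, 0.3])  # goal pose from object tracking
obstacle = np.array([0.1, 0.5, 0.3])  # nearest obstacle from world model

# A feedback controller could use this sum as a velocity command,
# recomputed at every control cycle as perception updates arrive.
print(attractive_force(x, goal) + repulsive_force(x, obstacle))
```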
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.