2013


3-D Object Reconstruction of Symmetric Objects by Fusing Visual and Tactile Sensing

Ilonen, J., Bohg, J., Kyrki, V.

The International Journal of Robotics Research, 33(2):321-341, Sage, October 2013 (article)

Abstract
In this work, we propose to reconstruct a complete 3-D model of an unknown object by fusion of visual and tactile information while the object is grasped. Assuming the object is symmetric, a first hypothesis of its complete 3-D shape is generated. A grasp is executed on the object with a robotic manipulator equipped with tactile sensors. Given the detected contacts between the fingers and the object, the initial full object model including the symmetry parameters can be refined. This refined model will then allow the planning of more complex manipulation tasks. The main contribution of this work is an optimal estimation approach for the fusion of visual and tactile data applying the constraint of object symmetry. The fusion is formulated as a state estimation problem and solved with an iterative extended Kalman filter. The approach is validated experimentally using both artificial and real data from two different robotic platforms.
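
The fusion step described above is, at its core, an iterated extended Kalman filter measurement update. The sketch below shows that generic update in Python; the state vector, the measurement function h, its Jacobian, and the noise covariances are placeholders for illustration, not the symmetry-parameterized model used in the paper.

```python
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, n_iter=5):
    """One iterated EKF measurement update (Gauss-Newton relinearization)."""
    xi = x.copy()
    for _ in range(n_iter):
        H = H_jac(xi)                               # Jacobian at the current iterate
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        xi = x + K @ (z - h(xi) - H @ (x - xi))     # relinearized state update
    P_post = (np.eye(len(x)) - K @ H) @ P           # posterior covariance
    return xi, P_post

# Hypothetical toy usage: refine a 2-D point estimate from one noisy range measurement
x0, P0 = np.array([1.0, 1.0]), np.eye(2)
h = lambda s: np.array([np.hypot(*s)])
H_jac = lambda s: (s / np.hypot(*s)).reshape(1, 2)
x_post, P_post = iekf_update(x0, P0, z=np.array([1.6]), h=h, H_jac=H_jac, R=0.01 * np.eye(1))
```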

Web DOI Project Page [BibTex]

Optimal control of reaching includes kinematic constraints

Mistry, M., Theodorou, E., Schaal, S., Kawato, M.

Journal of Neurophysiology, 2013, clmc (article)

Abstract
We investigate adaptation under a reaching task with an acceleration-based force field perturbation designed to alter the nominal straight hand trajectory in a potentially benign manner: pushing the hand off course in one direction before subsequently restoring it towards the target. In this particular task, an explicit strategy to reduce motor effort requires a distinct deviation from the nominal rectilinear hand trajectory. Rather, our results display a clear directional preference during learning, as subjects adapted perturbed curved trajectories towards their initial baselines. We model this behavior using the framework of stochastic optimal control theory and an objective function that trades off the discordant requirements of 1) target accuracy, 2) motor effort, and 3) desired trajectory. Our work addresses the underlying objective of a reaching movement, and we suggest that robustness, particularly against internal model uncertainty, is as essential to the reaching task as terminal accuracy and energy efficiency.
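
Purely as an illustration of the trade-off named in points 1)-3), such an objective is commonly written as a composite cost of the form below; the weighting matrices and the desired trajectory x_d(t) are assumptions for exposition, not the values identified in the study.

```latex
J = \| x(T) - x^{*} \|^{2}_{Q_T}
  + \int_{0}^{T} \Big( \| u(t) \|^{2}_{R} + \| x(t) - x_{d}(t) \|^{2}_{Q} \Big)\, dt
```

The first term penalizes terminal target error, the second motor effort, and the third deviation from the desired trajectory.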

PDF [BibTex]

Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors

Ijspeert, A., Nakanishi, J., Pastor, P., Hoffmann, H., Schaal, S.

Neural Computation, (25):328-373, 2013, clmc (article)

Abstract
Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics.
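
A minimal discrete (point-attractor) DMP can be sketched as follows; the gain values and basis-function placement are common textbook choices, not necessarily those used in the paper.

```python
import numpy as np

def dmp_rollout(y0, g, w, c, h, tau=1.0, alpha_z=25.0, beta_z=6.25,
                alpha_x=8.0, dt=0.001, T=1.0):
    """Integrate a 1-D discrete DMP: a critically damped spring toward the goal g,
    shaped by a learnable forcing term driven by the phase variable x."""
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-h * (x - c) ** 2)                      # Gaussian basis functions
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)   # forcing term
        zd = alpha_z * (beta_z * (g - y) - z) + f            # transformation system
        yd = z
        xd = -alpha_x * x                                    # canonical system
        z, y, x = z + zd * dt / tau, y + yd * dt / tau, x + xd * dt / tau
        traj.append(y)
    return np.array(traj)

# Hypothetical usage: zero weights give a plain point-attractor reach from 0 to 1;
# learned weights would shape the transient.
c = np.exp(-8.0 * np.linspace(0, 1, 10))    # basis centers along the phase variable
h = 50.0 * np.ones(10)                      # fixed basis widths, chosen for simplicity
trajectory = dmp_rollout(y0=0.0, g=1.0, w=np.zeros(10), c=c, h=h)
```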

link (url) [BibTex]

Using Torque Redundancy to Optimize Contact Forces in Legged Robots

Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., Schaal, S.

In Redundancy in Robot Manipulators and Multi-Robot Systems, 57, pages: 35-51, Lecture Notes in Electrical Engineering, Springer Berlin Heidelberg, 2013 (incollection)

Abstract
The development of legged robots for complex environments requires controllers that guarantee both high tracking performance and compliance with the environment. More specifically the control of contact interaction with the environment is of crucial importance to ensure stable, robust and safe motions. In the following, we present an inverse dynamics controller that exploits torque redundancy to directly and explicitly minimize any combination of linear and quadratic costs in the contact constraints and in the commands. Such a result is particularly relevant for legged robots as it allows the use of torque redundancy to directly optimize contact interactions. For example, given a desired locomotion behavior, it can guarantee the minimization of contact forces to reduce slipping on difficult terrains while ensuring high tracking performance of the desired motion. The proposed controller is very simple and computationally efficient, and most importantly it can greatly improve the performance of legged locomotion on difficult terrains as can be seen in the experimental results.

link (url) [BibTex]

Optimal distribution of contact forces with inverse-dynamics control

Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., Schaal, S.

The International Journal of Robotics Research, 32(3):280-298, March 2013 (article)

Abstract
The development of legged robots for complex environments requires controllers that guarantee both high tracking performance and compliance with the environment. More specifically the control of the contact interaction with the environment is of crucial importance to ensure stable, robust and safe motions. In this contribution we develop an inverse-dynamics controller for floating-base robots under contact constraints that can minimize any combination of linear and quadratic costs in the contact constraints and the commands. Our main result is the exact analytical derivation of the controller. Such a result is particularly relevant for legged robots as it allows us to use torque redundancy to directly optimize contact interactions. For example, given a desired locomotion behavior, we can guarantee the minimization of contact forces to reduce slipping on difficult terrains while ensuring high tracking performance of the desired motion. The main advantages of the controller are its simplicity, computational efficiency and robustness to model inaccuracies. We present detailed experimental results on simulated humanoid and quadruped robots as well as a real quadruped robot. The experiments demonstrate that the controller can greatly improve the robustness of locomotion of the robots.
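
For orientation, the structure being exploited can be written in standard floating-base notation (this is the generic setup, not text from the paper): with inertia matrix M, nonlinear terms h, actuation selection matrix S, contact Jacobian J_c and contact forces λ,

```latex
M(q)\,\ddot{q} + h(q,\dot{q}) = S^{\top}\tau + J_{c}^{\top}\lambda ,
\qquad J_{c}\,\dot{q} = 0 ,
\qquad \tau^{*} = \arg\min_{\tau}\ \lambda^{\top} W_{\lambda}\,\lambda + \tau^{\top} W_{\tau}\,\tau .
```

Because many torque vectors τ realize the same constrained motion, the leftover freedom can be spent on a quadratic cost in λ and τ; the article derives the analytical minimizer for this family of costs.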

link (url) DOI [BibTex]


2010


Policy learning algorithms for motor learning (Algorithmen zum automatischen Erlernen von Motorfähigkeiten)

Peters, J., Kober, J., Schaal, S.

Automatisierungstechnik, 58(12):688-694, 2010, clmc (article)

Abstract
Robot learning methods which allow autonomous robots to adapt to novel situations have been a long standing vision of robotics, artificial intelligence, and cognitive sciences. However, to date, learning techniques have yet to fulfill this promise as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics. If possible, scaling was usually only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to policy learning with the goal of an application to motor skill refinement in order to get one step closer towards human-like performance. For doing so, we study two major components for such an approach, i.e., firstly, we study policy learning algorithms which can be applied in the general setting of motor skill learning, and, secondly, we study a theoretically well-founded general approach to representing the required control structures for task representation and execution.

link (url) [BibTex]


A Bayesian approach to nonlinear parameter identification for rigid-body dynamics

Ting, J., D'Souza, A., Schaal, S.

Neural Networks, 2010, clmc (article)

Abstract
For complex robots such as humanoids, model-based control is highly beneficial for accurate tracking while keeping negative feedback gains low for compliance. However, in such multi degree-of-freedom lightweight systems, conventional identification of rigid body dynamics models using CAD data and actuator models is inaccurate due to unknown nonlinear robot dynamic effects. An alternative method is data-driven parameter estimation, but significant noise in measured and inferred variables affects it adversely. Moreover, standard estimation procedures may give physically inconsistent results due to unmodeled nonlinearities or insufficiently rich data. This paper addresses these problems, proposing a Bayesian system identification technique for linear or piecewise linear systems. Inspired by Factor Analysis regression, we develop a computationally efficient variational Bayesian regression algorithm that is robust to ill-conditioned data, automatically detects relevant features, and identifies input and output noise. We evaluate our approach on rigid-body parameter estimation for various robotic systems, achieving errors up to three times lower than those of other state-of-the-art machine learning methods.
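
A useful backdrop: rigid-body dynamics are linear in the inertial parameters, τ = Y(q, q̇, q̈) θ, so identification reduces to (Bayesian) linear regression on noisy data. The sketch below is plain Bayesian ridge regression with a fixed Gaussian prior, a simplification that omits the paper's variational treatment of input noise and automatic relevance detection.

```python
import numpy as np

def bayes_linreg(Y, tau, sigma2=1e-2, prior_prec=1e-3):
    """Gaussian posterior over parameters theta for tau = Y @ theta + noise."""
    d = Y.shape[1]
    A = prior_prec * np.eye(d) + (Y.T @ Y) / sigma2     # posterior precision
    Sigma = np.linalg.inv(A)                            # posterior covariance
    mu = Sigma @ (Y.T @ tau) / sigma2                   # posterior mean
    return mu, Sigma

# Hypothetical usage: a random matrix stands in for the regressor Y(q, qd, qdd)
rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 10))
theta_true = rng.normal(size=10)
tau = Y @ theta_true + 0.1 * rng.normal(size=500)
mu, Sigma = bayes_linreg(Y, tau)
```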

link (url) [BibTex]


A first optimal control solution for a complex, nonlinear, tendon driven neuromuscular finger model

Theodorou, E. A., Todorov, E., Valero-Cuevas, F.

Proceedings of the ASME 2010 Summer Bioengineering Conference, August 30-September 2, 2010, Naples, Florida, USA, 2010, clmc (article)

Abstract
In this work we present the first constrained stochastic optimal feedback controller applied to a fully nonlinear, tendon driven index finger model. Our model also takes into account an extensor mechanism, and muscle force-length and force-velocity properties. We show this feedback controller is robust to noise and perturbations to the dynamics, while successfully handling the nonlinearities and high dimensionality of the system. By extending prior methods, we are able to approximate physiological realism by ensuring positivity of neural commands and tendon tensions at all times, and thus can, for the first time, use the optimal control framework to predict biologically plausible tendon tensions for a nonlinear neuromuscular finger model. The rigid-body triple-pendulum finger model with slightly viscous joints is actuated by Hill-type muscle models, with joint torques generated by the seven muscles of the index finger.

PDF [BibTex]

Locally weighted regression for control

Ting, J., Vijayakumar, S., Schaal, S.

In Encyclopedia of Machine Learning, pages: 613-624, (Editors: Sammut, C.;Webb, G. I.), Springer, 2010, clmc (inbook)

Abstract
This article addresses two topics: learning control and locally weighted regression.
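
Since the entry is only summarized here, a compact sketch of the core idea may help: locally weighted regression fits a separate weighted least-squares model around each query point. The kernel, bandwidth, and toy data below are assumptions for illustration.

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=0.3):
    """Locally weighted linear regression: weight the data by a Gaussian kernel
    centered at x_query, fit a local linear model, and predict at the query."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xa = np.hstack([X, np.ones((len(X), 1))])           # augment with a bias column
    W = np.diag(w)
    beta = np.linalg.solve(Xa.T @ W @ Xa + 1e-8 * np.eye(Xa.shape[1]),
                           Xa.T @ W @ y)                # weighted normal equations
    return np.append(x_query, 1.0) @ beta

# Hypothetical usage on noisy sine data
rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print(lwr_predict(X, y, x_query=np.array([1.5])))       # roughly sin(1.5)
```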

link (url) [BibTex]

Efficient learning and feature detection in high dimensional regression

Ting, J., D’Souza, A., Vijayakumar, S., Schaal, S.

Neural Computation, 22, pages: 831-886, 2010, clmc (article)

Abstract
We present a novel algorithm for efficient learning and feature selection in high-dimensional regression problems. We arrive at this model through a modification of the standard regression model, enabling us to derive a probabilistic version of the well-known statistical regression technique of backfitting. Using the Expectation-Maximization algorithm, along with variational approximation methods to overcome intractability, we extend our algorithm to include automatic relevance detection of the input features. This Variational Bayesian Least Squares (VBLS) approach retains its simplicity as a linear model, but offers a novel statistically robust "black-box" approach to generalized linear regression with high-dimensional inputs. It can be easily extended to nonlinear regression and classification problems. In particular, we derive the framework of sparse Bayesian learning, e.g., the Relevance Vector Machine, with VBLS at its core, offering significant computational and robustness advantages for this class of methods. We evaluate our algorithm on synthetic and neurophysiological data sets, as well as on standard regression and classification benchmark data sets, comparing it with other competitive statistical approaches and demonstrating its suitability as a drop-in replacement for other generalized linear regression techniques.
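
For readers who want the flavor of the sparse-Bayesian connection, the sketch below runs standard RVM-style automatic relevance determination updates on a linear model; it is a generic evidence re-estimation loop, not the VBLS backfitting derivation of the paper.

```python
import numpy as np

def ard_regression(Phi, t, n_iter=100):
    """Evidence-based ARD updates for t = Phi @ w + noise; features whose
    precision alpha grows very large are effectively pruned from the model."""
    N, D = Phi.shape
    alpha, beta = np.ones(D), 1.0       # per-feature prior precisions, noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        gamma = 1.0 - alpha * np.diag(Sigma)                  # effective number of parameters
        alpha = np.maximum(gamma, 1e-12) / (mu ** 2 + 1e-12)  # relevance re-estimation
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
    return mu, alpha

# Hypothetical usage: only the first 3 of 20 features carry signal
rng = np.random.default_rng(2)
Phi = rng.normal(size=(300, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.0, 0.5]
t = Phi @ w_true + 0.05 * rng.normal(size=300)
mu, alpha = ard_regression(Phi, t)                            # alpha[3:] should grow large
```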

link (url) [BibTex]

Stochastic Differential Dynamic Programming

Theodorou, E., Tassa, Y., Todorov, E.

In Proceedings of the American Control Conference (ACC 2010), 2010, clmc (article)

Abstract
We present a generalization of the classic Differential Dynamic Programming algorithm. We assume the existence of state- and control-dependent process noise, and proceed to derive the second-order expansion of the cost-to-go. Despite having quartic and cubic terms in the initial expression, we show that these vanish, leaving us with the same quadratic structure as standard DDP.
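
For reference, the deterministic backbone being generalized is the local quadratic model of the cost-to-go used in every DDP backward pass; second-order dynamics terms and the paper's noise-dependent corrections are omitted from this sketch.

```latex
Q_x = \ell_x + f_x^{\top} V'_x, \qquad
Q_u = \ell_u + f_u^{\top} V'_x, \qquad
Q_{xx} = \ell_{xx} + f_x^{\top} V'_{xx} f_x, \qquad
Q_{uu} = \ell_{uu} + f_u^{\top} V'_{xx} f_u, \qquad
Q_{ux} = \ell_{ux} + f_u^{\top} V'_{xx} f_x,

\delta u^{*} = -\,Q_{uu}^{-1}\left( Q_u + Q_{ux}\,\delta x \right).
```

Here ℓ is the stage cost, f the dynamics, and V' the value function at the next time step; the paper shows that the cubic and quartic terms introduced by state- and control-dependent noise vanish, preserving this quadratic structure.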

PDF [BibTex]

Learning control in robotics – trajectory-based optimal control techniques

Schaal, S., Atkeson, C. G.

Robotics and Automation Magazine, 17(2):20-29, 2010, clmc (article)

Abstract
In a not too distant future, robots will be a natural part of daily life in human society, providing assistance in many areas ranging from clinical applications, education and care giving, to normal household environments [1]. It is hard to imagine that all possible tasks can be preprogrammed in such robots. Robots need to be able to learn, either by themselves or with the help of human supervision. Additionally, wear and tear on robots in daily use needs to be automatically compensated for, which requires a form of continuous self-calibration, another form of learning. Finally, robots need to react to stochastic and dynamic environments, i.e., they need to learn how to optimally adapt to uncertainty and unforeseen changes. Robot learning is going to be a key ingredient for the future of autonomous robots. While robot learning covers a rather large field, from learning to perceive, to plan, to make decisions, etc., we will focus this review on topics of learning control, in particular, as it is concerned with learning control in simulated or actual physical robots. In general, learning control refers to the process of acquiring a control strategy for a particular control system and a particular task by trial and error. Learning control is usually distinguished from adaptive control [2] in that the learning system can have rather general optimization objectives (not just, e.g., minimal tracking error) and is permitted to fail during the process of learning, while adaptive control emphasizes fast convergence without failure. Thus, learning control resembles the way that humans and animals acquire new movement strategies, while adaptive control is a special case of learning control that fulfills stringent performance constraints, e.g., as needed in life-critical systems like airplanes. Learning control has been an active topic of research for at least three decades. However, given the lack of working robots that actually use learning components, more work needs to be done before robot learning will make it beyond the laboratory environment. This article will survey some ongoing and past activities in robot learning to assess where the field stands and where it is going. We will largely focus on nonwheeled robots and less on topics of state estimation, as typically explored in wheeled robots [3]-[6], and we emphasize learning in continuous state-action spaces rather than discrete state-action spaces [7], [8]. We will illustrate the different topics of robot learning with examples from our own research with anthropomorphic and humanoid robots.

link (url) [BibTex]

Learning, planning, and control for quadruped locomotion over challenging terrain

Kalakrishnan, M., Buchli, J., Pastor, P., Mistry, M., Schaal, S.

International Journal of Robotics Research, 30(2):236-258, 2010, clmc (article)

Abstract
We present a control architecture for fast quadruped locomotion over rough terrain. We approach the problem by decomposing it into many sub-systems, in which we apply state-of-the-art learning, planning, optimization, and control techniques to achieve robust, fast locomotion. Unique features of our control strategy include: (1) a system that learns optimal foothold choices from expert demonstration using terrain templates, (2) a body trajectory optimizer based on the Zero-Moment Point (ZMP) stability criterion, and (3) a floating-base inverse dynamics controller that, in conjunction with force control, allows for robust, compliant locomotion over unperceived obstacles. We evaluate the performance of our controller by testing it on the LittleDog quadruped robot, over a wide variety of rough terrains of varying difficulty levels. The terrain that the robot was tested on includes rocks, logs, steps, barriers, and gaps, with obstacle sizes up to the leg length of the robot. We demonstrate the generalization ability of this controller by presenting results from testing performed by an independent external test team on terrain that has never been shown to us.
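
For context on item (2), the ZMP criterion reduces, under the common cart-table approximation (point mass at constant height, angular momentum neglected), to keeping the quantity below inside the support polygon; this simplified form is an illustration rather than the exact model used by the optimizer.

```latex
x_{\mathrm{zmp}} = x_{\mathrm{com}} - \frac{z_{\mathrm{com}}}{g}\,\ddot{x}_{\mathrm{com}}
```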

link (url) Project Page [BibTex]


2007


The new robotics - towards human-centered machines

Schaal, S.

HFSP Journal Frontiers of Interdisciplinary Research in the Life Sciences, 1(2):115-126, 2007, clmc (article)

Abstract
Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision that has been developed in past years by our own university and many other national and international research institutions and addresses how increasingly more human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

link (url) [BibTex]

Dynamics systems vs. optimal control – a unifying view

Schaal, S., Mohajerian, P., Ijspeert, A.

In Progress in Brain Research, (165):425-445, 2007, clmc (inbook)

Abstract
In the past, computational motor control has been approached from at least two major frameworks: the dynamic systems approach and the viewpoint of optimal control. The dynamic system approach emphasizes motor control as a process of self-organization between an animal and its environment. Nonlinear differential equations that can model entrainment and synchronization behavior are among the most favorable tools of dynamic systems modelers. In contrast, optimal control approaches view motor control as the evolutionary or developmental result of a nervous system that tries to optimize rather general organizational principles, e.g., energy consumption or accurate task achievement. Optimal control theory is usually employed to develop appropriate theories. Interestingly, there is rather little interaction between dynamic systems and optimal control modelers as the two approaches follow rather different philosophies and are often viewed as diametrically opposing. In this paper, we develop a computational approach to motor control that offers a unifying modeling framework for both dynamic systems and optimal control approaches. In discussions of several behavioral experiments and some theoretical and robotics studies, we demonstrate how our computational ideas allow both the representation of self-organizing processes and the optimization of movement based on reward criteria. Our modeling framework is rather simple and general, and opens opportunities to revisit many previous modeling results from this novel unifying view.

link (url) [BibTex]


2004


Discovering optimal imitation strategies

Billard, A., Epars, Y., Calinon, S., Cheng, G., Schaal, S.

Robotics and Autonomous Systems, 47(2-3):68-77, 2004, clmc (article)

Abstract
This paper develops a general policy for learning relevant features of an imitation task. We restrict our study to imitation of manipulative tasks or of gestures. The imitation process is modeled as a hierarchical optimization system, which minimizes the discrepancy between two multi-dimensional datasets. To classify across manipulation strategies, we apply a probabilistic analysis to data in Cartesian and joint spaces. We determine a general metric that optimizes the policy of task reproduction, following strategy determination. The model successfully discovers strategies in six different imitative tasks and controls task reproduction by a full body humanoid robot.

[BibTex]

Rhythmic movement is not discrete

Schaal, S., Sternad, D., Osu, R., Kawato, M.

Nature Neuroscience, 7(10):1137-1144, 2004, clmc (article)

Abstract
Rhythmic movements, like walking, chewing, or scratching, are phylogenetically old motor behaviors found in many organisms, ranging from insects to primates. In contrast, discrete movements, like reaching, grasping, or kicking, are behaviors that have reached sophistication primarily in younger species, particularly in primates. Neurophysiological and computational research on arm motor control has focused almost exclusively on discrete movements, essentially assuming similar neural circuitry for rhythmic tasks. In contrast, many behavioral studies focused on rhythmic models, subsuming discrete movement as a special case. Here, using a human functional neuroimaging experiment, we show that in addition to areas activated in rhythmic movement, discrete movement involves several higher cortical planning areas, despite both movement conditions being confined to the same single wrist joint. These results provide the first neuroscientific evidence that rhythmic arm movement cannot be part of a more general discrete movement system, and may require separate neurophysiological and theoretical treatment.

link (url) [BibTex]

Learning from demonstration and adaptation of biped locomotion

Nakanishi, J., Morimoto, J., Endo, G., Cheng, G., Schaal, S., Kawato, M.

Robotics and Autonomous Systems, 47(2-3):79-91, 2004, clmc (article)

Abstract
In this paper, we introduce a framework for learning biped locomotion using dynamical movement primitives based on non-linear oscillators. Our ultimate goal is to establish a design principle of a controller in order to achieve natural human-like locomotion. We suggest dynamical movement primitives as a central pattern generator (CPG) of a biped robot, an approach we have previously proposed for learning and encoding complex human movements. Demonstrated trajectories are learned through movement primitives by locally weighted regression, and the frequency of the learned trajectories is adjusted automatically by a novel frequency adaptation algorithm based on phase resetting and entrainment of coupled oscillators. Numerical simulations and experimental implementation on a physical robot demonstrate the effectiveness of the proposed locomotion controller.
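
The flavor of oscillator entrainment can be conveyed with a generic adaptive-frequency phase oscillator that locks onto a periodic teaching signal; this is a standard construction given here for illustration, not the phase-resetting adaptation algorithm developed in the paper.

```python
import numpy as np

def adapt_frequency(F, dt=0.001, omega0=2.0, K=20.0):
    """Generic adaptive-frequency phase oscillator: the phase phi entrains to the
    periodic input F(t) and the intrinsic frequency omega drifts toward the input frequency."""
    phi, omega = 0.0, omega0
    for f in F:
        coupling = -K * f * np.sin(phi)
        phi += (omega + coupling) * dt       # phase dynamics with entrainment term
        omega += coupling * dt               # slow frequency adaptation
    return omega

# Hypothetical usage: a 1 Hz teaching signal; omega should drift toward 2*pi rad/s
t = np.arange(0.0, 60.0, 0.001)
print(adapt_frequency(np.cos(2 * np.pi * t)))
```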

link (url) [BibTex]

Feedback error learning and nonlinear adaptive control

Nakanishi, J., Schaal, S.

Neural Networks, 17(10):1453-1465, 2004, clmc (article)

Abstract
In this paper, we present our theoretical investigations of the technique of feedback error learning (FEL) from the viewpoint of adaptive control. We first discuss the relationship between FEL and nonlinear adaptive control with adaptive feedback linearization, and show that FEL can be interpreted as a form of nonlinear adaptive control. Second, we present a Lyapunov analysis suggesting that the condition of strictly positive realness (SPR) associated with the tracking error dynamics is a sufficient condition for asymptotic stability of the closed-loop dynamics. Specifically, for a class of second order SISO systems, we show that this condition reduces to KD^2 > KP, where KP and KD are positive position and velocity feedback gains, respectively. Moreover, we provide a 'passivity'-based stability analysis which suggests that SPR of the tracking error dynamics is a necessary and sufficient condition for asymptotic hyperstability. Thus, the condition KD^2 > KP mentioned above is not only a sufficient but also necessary condition to guarantee asymptotic hyperstability of FEL, i.e. the tracking error is bounded and asymptotically converges to zero. As a further point, we explore the adaptive control and FEL framework for feedforward control formulations, and derive an additional sufficient condition for asymptotic stability in the sense of Lyapunov. Finally, we present numerical simulations to illustrate the stability properties of FEL obtained from our mathematical analysis.
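
A minimal feedback error learning loop for a single second-order plant is sketched below, under simplifying assumptions (a known linear plant, a linear-in-parameters feedforward model, sinusoidal desired trajectory); the PD gains are picked to respect the KD^2 > KP condition from the abstract, and the plant parameters m and b are arbitrary.

```python
import numpy as np

def fel_simulation(T=60.0, dt=0.001, KP=25.0, KD=10.0, gamma=0.5, m=2.0, b=0.5):
    """Feedback error learning on the plant m*qdd + b*qd = u: the feedforward
    weights theta are adapted using the feedback command as the error signal."""
    theta = np.zeros(2)                                   # inverse-model weights
    q, qd = 0.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
        phi = np.array([qdd_des, qd_des])                 # inverse-model features
        u_ff = theta @ phi                                # learned feedforward command
        u_fb = KP * (q_des - q) + KD * (qd_des - qd)      # PD feedback (KD**2 > KP)
        u = u_ff + u_fb
        qdd = (u - b * qd) / m                            # plant dynamics
        qd += qdd * dt
        q += qd * dt
        theta += gamma * u_fb * phi * dt                  # FEL update rule
    return theta

print(fel_simulation())   # theta should drift toward the true plant parameters [m, b]
```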

link (url) [BibTex]

Computational approaches to motor learning by imitation

Schaal, S., Ijspeert, A., Billard, A.

In The Neuroscience of Social Interaction, (1431):199-218, (Editors: Frith, C. D.;Wolpert, D.), Oxford University Press, Oxford, 2004, clmc (inbook)

Abstract
Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking - indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions.

link (url) [BibTex]


1993


Learning passive motor control strategies with genetic algorithms

Schaal, S., Sternad, D.

In 1992 Lectures in complex systems, pages: 913-918, (Editors: Nadel, L.;Stein, D.), Addison-Wesley, Redwood City, CA, 1993, clmc (inbook)

Abstract
This study investigates learning passive motor control strategies. Passive control is understood as control without active error correction; the movement is stabilized by particular properties of the controlling dynamics. We analyze the task of juggling a ball on a racket. An approximation to the optimal solution of the task is derived by means of optimization theory. In order to model the learning process, the problem is coded for a genetic algorithm in representations with and without sensory information. For all representations the genetic algorithm is able to find passive control strategies, but learning speed and the quality of the outcome differ significantly. A comparison with data from human subjects shows that humans seem to apply movement strategies different from the ones proposed. For the feedback representation some implications arise for learning from demonstration.
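
For readers unfamiliar with the machinery, a bare-bones real-coded genetic algorithm over control parameters looks roughly as follows; the fitness function is a placeholder, not the juggling task's cost, and the operators are generic rather than those used in the study.

```python
import numpy as np

def genetic_algorithm(fitness, dim=4, pop_size=40, n_gen=200, mut_std=0.1, seed=3):
    """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        new_pop = []
        for _ in range(pop_size):
            i, j = rng.integers(pop_size, size=2)          # tournament selection, parent a
            a = pop[i] if fit[i] > fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)          # tournament selection, parent b
            b = pop[i] if fit[i] > fit[j] else pop[j]
            alpha = rng.uniform(size=dim)                  # blend crossover
            child = alpha * a + (1 - alpha) * b + rng.normal(scale=mut_std, size=dim)
            new_pop.append(child)
        pop = np.array(new_pop)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)]

# Placeholder fitness: prefer parameters close to an arbitrary target vector
best = genetic_algorithm(lambda x: -np.sum((x - np.array([1.0, -0.5, 0.2, 0.0])) ** 2))
```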

link (url) [BibTex]

A genetic algorithm for evolution from an ecological perspective

Sternad, D., Schaal, S.

In 1992 Lectures in Complex Systems, pages: 223-231, (Editors: Nadel, L.;Stein, D.), Addison-Wesley, Redwood City, CA, 1993, clmc (inbook)

Abstract
In the population model presented, an evolutionary dynamic is explored which is based on the operator characteristics of genetic algorithms. An essential modification in the genetic algorithms is the inclusion of a constraint in the mixing of the gene pool. The pairing for the crossover is governed by a selection principle based on a complementarity criterion derived from the theoretical tenet of perception-action (P-A) mutuality of ecological psychology. According to Swenson and Turvey [37] P-A mutuality underlies evolution and is an integral part of its thermodynamics. The present simulation tested the contribution of P-A-cycles in evolutionary dynamics. A numerical experiment compares the population's evolution with and without this intentional component. The effect is measured in the difference of the rate of energy dissipation, as well as in three operationalized aspects of complexity. The results support the predicted increase in the rate of energy dissipation, paralleled by an increase in the average heterogeneity of the population. Furthermore, the spatio-temporal evolution of the system is tested for the characteristic power-law relations of a nonlinear system poised in a critical state. The frequency distribution of consecutive increases in population size shows a significantly different exponent in functional relationship.

[BibTex]

Design concurrent calculation: A CAD- and data-integrated approach

Schaal, S., Ehrlenspiel, K.

Journal of Engineering Design, 4, pages: 71-85, 1993, clmc (article)

Abstract
Besides functional requirements, product design increasingly demands further-reaching considerations. Quality alone no longer suffices to compete in the market; design for manufacturability, for assembly, for recycling, etc., are well-known keywords. These can largely be reduced to the necessity of design for costs. This paper focuses on a CAD-based approach to design concurrent calculation. It discusses how well-established tools like feature technology, knowledge-based systems, and relational databases can be blended into one coherent concept to achieve an entirely CAD- and data-integrated cost information tool. This system is able to extract data from the CAD system, combine it with data about the company-specific manufacturing environment, and subsequently autonomously evaluate manufacturability aspects and costs of the given CAD model. Within minutes the designer gets quantitative information about the major cost sources of his/her design. Additionally, some alternative methods for approximating manufacturing times from empirical data, namely neural networks and locally weighted regression, are introduced.

[BibTex]
