Robust Constraint-consistent Learning

Matthew Howard, Stefan Klanke, Michael Gienger, Christian Goerick, Sethu Vijayakumar

Research output: Chapter in Book/Report/Conference proceeding › Conference paper

3 Citations (Scopus)

Abstract

Many everyday human skills can be framed in terms of performing some task subject to constraints imposed by the environment. Constraints are usually unobservable and frequently change between contexts. In this paper, we present a novel approach for learning (unconstrained) control policies from movement data, where observations are recorded under different constraint settings. Our approach seamlessly integrates unconstrained and constrained observations by performing hybrid optimisation of two risk functionals. The first is a novel risk functional that makes a meaningful comparison between the estimated policy and constrained observations. The second is the standard risk, used to reduce the expected error under impoverished sets of constraints. We demonstrate our approach on systems of varying complexity, and illustrate its utility for transfer learning of a car washing task from human motion capture data.
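As a rough illustration of the hybrid-risk idea described in the abstract, the Python sketch below shows one way the two risk functionals can be combined: constrained observations only reveal the policy's component within the locally unconstrained subspace, so they are compared against the estimated policy projected onto each observed movement direction, while unconstrained observations use the ordinary squared error. This is a minimal sketch under stated assumptions, not the authors' implementation; the specific projection-based functional, the weighting `lam`, the linear policy model, and the synthetic data are all illustrative.

```python
# Minimal sketch of hybrid optimisation of two risk functionals for
# constraint-consistent policy learning. Illustrative only; the exact
# functionals and models used in the paper may differ.
import numpy as np
from scipy.optimize import minimize


def constraint_consistent_risk(X_c, U_c, policy):
    """Risk for constrained data: compare each observation u with the
    component of the estimated policy along u's direction (assumed form)."""
    err = 0.0
    for x, u in zip(X_c, U_c):
        u_norm2 = u @ u
        if u_norm2 < 1e-12:          # fully constrained observation: skip
            continue
        P = np.outer(u, u) / u_norm2  # projector onto the observed direction
        err += np.sum((u - P @ policy(x)) ** 2)
    return err


def standard_risk(X_u, U_u, policy):
    """Ordinary squared error for unconstrained observations."""
    return sum(np.sum((u - policy(x)) ** 2) for x, u in zip(X_u, U_u))


def hybrid_risk(X_c, U_c, X_u, U_u, policy, lam=1.0):
    """Weighted combination of the two risks (weighting `lam` is assumed)."""
    return constraint_consistent_risk(X_c, U_c, policy) + lam * standard_risk(X_u, U_u, policy)


if __name__ == "__main__":
    # Synthetic example: recover a linear policy pi(x) = W x from a mix of
    # unconstrained data and data projected through random 1-D constraints.
    rng = np.random.default_rng(0)
    d = 2
    W_true = rng.standard_normal((d, d))

    X_u = rng.standard_normal((20, d))
    U_u = X_u @ W_true.T                      # unconstrained observations

    X_c = rng.standard_normal((20, d))
    U_c = []
    for x in X_c:                             # constrained observations
        a = rng.standard_normal(d)
        N = np.eye(d) - np.outer(a, a) / (a @ a)   # null-space projection
        U_c.append(N @ (W_true @ x))
    U_c = np.array(U_c)

    def objective(w_flat):
        W = w_flat.reshape(d, d)
        return hybrid_risk(X_c, U_c, X_u, U_u, lambda x: W @ x)

    W_est = minimize(objective, np.zeros(d * d)).x.reshape(d, d)
    print("recovered policy weights:\n", W_est)
```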
Original language: English
Title of host publication: IROS 2009. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009
Publisher: IEEE
Pages: 4629-4636
Number of pages: 8
ISBN (Print): 978-1-4244-3803-7
DOIs
Publication status: Published - 2009
