Adapting deep visuomotor representations with weak pairwise constraints

Title: Adapting deep visuomotor representations with weak pairwise constraints
Publication Type: Conference Paper
Year of Publication: 2016
Authors: Tzeng, E., Devin, C., Hoffman, J., Finn, C., Abbeel, P., Levine, S., Saenko, K., & Darrell, T.
Published in: Workshop on the Algorithmic Foundations of Robotics (WAFR)
Date Published: 2016
Abstract

Real-world robotics problems often occur in domains that differ significantly from the robot's prior training environment. For many robotic control tasks, real-world experience is expensive to obtain, but data is easy to collect in either an instrumented environment or in simulation. We propose a novel domain adaptation approach for robot perception that adapts visual representations learned on a large, easy-to-obtain source dataset (e.g. synthetic images) to a target real-world domain, without requiring expensive manual annotation of real-world data before policy search. Supervised domain adaptation methods minimize cross-domain differences using pairs of aligned images that contain the same object or scene in both the source and target domains, thus learning a domain-invariant representation. However, they require manual alignment of such image pairs. Fully unsupervised adaptation methods instead rely on minimizing the discrepancy between the feature distributions across domains. We propose a novel, more powerful combination of both distribution and pairwise image alignment, and remove the requirement for expensive annotation by using weakly aligned pairs of images in the source and target domains. Focusing on adaptation from simulation to the real world on a PR2 robot, we evaluate our approach on a manipulation task and show that by using weakly paired images, our method compensates for domain shift more effectively than previous techniques, enabling better robot performance in the real world.
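The combined objective described in the abstract can be sketched as the sum of a distribution-discrepancy term and a pairwise-alignment term over weakly aligned image pairs. The sketch below is illustrative only and not the paper's actual loss: it assumes a linear-kernel maximum mean discrepancy for the distribution term and a squared-distance penalty for the pairwise term, with a hypothetical weighting parameter `lam`.

```python
import numpy as np

def mmd_loss(source_feats, target_feats):
    # Distribution-alignment term: squared distance between the mean
    # feature vectors of the two domains (linear-kernel MMD).
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)

def pairwise_loss(source_pair_feats, target_pair_feats):
    # Pairwise-alignment term: mean squared distance between features of
    # weakly aligned pairs; row i of each array is assumed to depict the
    # same object or scene in the source and target domains.
    return float(((source_pair_feats - target_pair_feats) ** 2).sum(axis=1).mean())

def adaptation_loss(source_feats, target_feats,
                    source_pair_feats, target_pair_feats, lam=1.0):
    # Combined objective: distribution alignment plus weak pairwise
    # alignment, traded off by the (hypothetical) weight lam.
    return (mmd_loss(source_feats, target_feats)
            + lam * pairwise_loss(source_pair_feats, target_pair_feats))
```

In practice both terms would be computed on features from a shared network and minimized jointly with the task loss; here they are shown on plain arrays to make the structure of the combination explicit.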

URL: http://www.icsi.berkeley.edu/pubs/vision/weakpairwiseconstraints16.pdf
ICSI Research Group: Vision