Context Encoders: Feature Learning by Inpainting

Title: Context Encoders: Feature Learning by Inpainting
Publication Type: Conference Paper
Year of Publication: 2016
Authors: Pathak D., Krahenbuhl P., Donahue J., Darrell T., & Efros A. A.
Published in: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Page(s): 2536-2544
Date Published: 06/2016
Abstract

We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. To succeed at this task, context encoders need to both understand the content of the entire image and produce a plausible hypothesis for the missing part(s). When training context encoders, we experimented with both a standard pixel-wise reconstruction loss and a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
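
The training recipe described in the abstract -- an encoder-decoder that fills a masked region, optimized with a weighted sum of a pixel-wise L2 reconstruction loss and an adversarial loss -- can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: the layer sizes, the 128x128 input with a 64x64 center mask, and the loss weighting `lam` are assumptions chosen for brevity.

```python
# Minimal context-encoder training step (illustrative sketch, PyTorch).
# Assumes images are 128x128 RGB in [-1, 1] with a 64x64 center hole.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder that predicts the missing 64x64 center patch."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(True),     # 128 -> 64
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(True),   # 64 -> 32
            nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(True),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(True),  # 16 -> 32
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),   # 32 -> 64
            nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh(),                  # 64x64 patch
        )

    def forward(self, masked_img):
        return self.decoder(self.encoder(masked_img))

class Discriminator(nn.Module):
    """Scores whether a 64x64 patch is real or generated (one logit)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),    # 64 -> 32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 32 -> 16
            nn.Conv2d(128, 1, 16),                                 # 16 -> 1
        )

    def forward(self, patch):
        return self.net(patch).view(-1)

def training_step(gen, disc, g_opt, d_opt, images, lam=0.999):
    """One joint update: L2 reconstruction plus adversarial loss on the hole."""
    bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
    real_patch = images[:, :, 32:96, 32:96]      # ground-truth center region
    masked = images.clone()
    masked[:, :, 32:96, 32:96] = 0.0             # drop the center region
    fake_patch = gen(masked)

    # Discriminator: real center patches vs. generated ones.
    d_opt.zero_grad()
    d_loss = bce(disc(real_patch), torch.ones(images.size(0))) + \
             bce(disc(fake_patch.detach()), torch.zeros(images.size(0)))
    d_loss.backward()
    d_opt.step()

    # Generator: stay close to the ground truth and fool the discriminator.
    g_opt.zero_grad()
    g_loss = lam * mse(fake_patch, real_patch) + \
             (1 - lam) * bce(disc(fake_patch), torch.ones(images.size(0)))
    g_loss.backward()
    g_opt.step()
    return g_loss.item(), d_loss.item()
```

Weighting the reconstruction term far more heavily than the adversarial term keeps the prediction anchored to the ground truth, while the small adversarial term addresses the multi-modality the abstract mentions and sharpens the output.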

Acknowledgment

The authors would like to thank Amanda Buster for the artwork on Fig. 1b, as well as Shubham Tulsiani and Saurabh Gupta for helpful discussions. This work was supported in part by DARPA, AFRL, Intel, DoD MURI award N000141110688, NSF awards IIS-1212798, IIS-1427425, and IIS-1536003, the Berkeley Vision and Learning Center, and Berkeley Deep Drive.

URL: http://www.icsi.berkeley.edu/pubs/vision/contextencoders16.pdf
ICSI Research Group: Vision