Understanding object descriptions in robotics by open-vocabulary object retrieval and detection

Title: Understanding object descriptions in robotics by open-vocabulary object retrieval and detection
Publication Type: Journal Article
Year of Publication: 2015
Authors: Guadarrama, S., Rodner, E., Saenko, K., & Darrell, T.
Published in: The International Journal of Robotics Research
Volume: 35
Issue: 1-3
Page(s): 265-280
Date Published: 10/2015
Abstract

We address the problem of retrieving and detecting objects based on open-vocabulary natural language queries: given a phrase describing a specific object, for example “the corn flakes box”, the task is to find the best match in a set of images containing candidate objects. When naming objects, humans tend to use natural language with rich semantics, including basic-level categories, fine-grained categories, and instance-level concepts such as brand names. Existing approaches to large-scale object recognition fail in this scenario, as they expect queries that map directly to a fixed set of pre-trained visual categories, for example ImageNet synset tags. We address this limitation by introducing a novel object retrieval method. Given a candidate object image, we first map it to a set of words that are likely to describe it, using several learned image-to-text projections. We also propose a method for handling open vocabularies, that is, words not contained in the training data. We then compare the natural language query to the sets of words predicted for each candidate and select the best match. Our method can combine category- and instance-level semantics in a common representation. We present extensive experimental results on several datasets using both instance-level and category-level matching and show that our approach can accurately retrieve objects based on extremely varied open-vocabulary queries. Furthermore, we show how to process queries referring to objects within scenes, using state-of-the-art adapted detectors. The source code of our approach will be publicly available together with pre-trained models at http://openvoc.berkeleyvision.org and could be directly used for robotics applications.
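To make the retrieval idea in the abstract concrete, the sketch below shows a minimal version of the matching step: each candidate object comes with a set of words (standing in for the outputs of the learned image-to-text projections), the query is compared to each word set via word-vector similarity, and the best-scoring candidate is returned. The toy embeddings, the vocabulary, and the max-then-mean aggregation are all illustrative assumptions, not the authors' implementation; the actual code and pre-trained models are the ones referenced at http://openvoc.berkeleyvision.org.

```python
import numpy as np

# Toy word embeddings standing in for a real distributional model
# (e.g. word2vec); the vectors and vocabulary are purely illustrative.
EMBEDDINGS = {
    "corn":   np.array([0.90, 0.10, 0.00]),
    "flakes": np.array([0.80, 0.20, 0.10]),
    "cereal": np.array([0.85, 0.15, 0.05]),
    "box":    np.array([0.10, 0.90, 0.00]),
    "carton": np.array([0.15, 0.85, 0.10]),
    "mug":    np.array([0.00, 0.10, 0.90]),
    "cup":    np.array([0.05, 0.15, 0.85]),
}

def embed(word):
    """Look up a word vector; unseen (open-vocabulary) words fall back to zero."""
    return EMBEDDINGS.get(word, np.zeros(3))

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom > 0 else 0.0

def score(query_words, predicted_words):
    """Similarity between a query and one candidate's predicted word set.

    Each query word is matched to its most similar predicted word, then the
    matches are averaged; this max-then-mean aggregation is an assumed
    illustrative choice, not necessarily the scheme used in the paper.
    """
    per_word = [
        max(cosine(embed(q), embed(p)) for p in predicted_words)
        for q in query_words
    ]
    return sum(per_word) / len(per_word)

def retrieve(query, candidates):
    """Return the candidate whose predicted word set best matches the query."""
    query_words = query.lower().split()
    return max(candidates, key=lambda name: score(query_words, candidates[name]))

# Candidate objects with words produced by (hypothetical) image-to-text projections.
candidates = {
    "object_1": ["cereal", "carton"],
    "object_2": ["mug", "cup"],
}
print(retrieve("corn flakes box", candidates))  # -> object_1
```

In the paper's setting the predicted word sets would come from several learned image-to-text projections rather than being given by hand, and open-vocabulary query words would be handled through the proposed out-of-vocabulary mechanism rather than a zero-vector fallback.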

URL: http://dx.doi.org/10.1177/0278364915602059
DOI: 10.1177/0278364915602059
ICSI Research Group: Vision