Visual Grasp Affordances From Appearance-Based Cues

Title: Visual Grasp Affordances From Appearance-Based Cues
Publication Type: Conference Paper
Year of Publication: 2011
Authors: Song, H. O., Fritz, M., Gu, C., & Darrell, T.
Page(s): 998-1005
Other Numbers: 3235
Abstract

In this paper, we investigate the prediction of visual grasp affordances from 2D measurements. Appearance-based estimation of grasp affordances is desirable when 3-D scans are unreliable due to clutter or material properties. We develop a general framework for estimating grasp affordances from 2-D sources, including local texture-like measures as well as object-category measures that capture previously learned grasp strategies. Local approaches to estimating grasp positions have been shown to be effective in real-world scenarios, but are unable to impart object-level biases and can be prone to false positives. We describe how global cues can be used to compute continuous pose estimates and corresponding grasp point locations, using a max-margin optimization for category-level continuous pose regression. We provide a novel dataset to evaluate visual grasp affordance estimation; on this dataset we show that a fused method outperforms either local or global methods alone, and that continuous pose estimation improves over discrete output models.
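
As a rough sketch of the fusion idea in the abstract (a hypothetical Python illustration, not the authors' formulation: the function name, the Gaussian prior on the global term, and the weighting parameter alpha are all assumptions), one could combine a local per-pixel grasp-score map with the grasp point implied by the global pose estimate as follows:

    import numpy as np

    def fuse_grasp_affordances(local_scores, global_grasp_xy, sigma=10.0, alpha=0.5):
        """Combine a local (texture-based) grasp-score map with a global,
        pose-derived grasp-point prediction; return the fused argmax."""
        h, w = local_scores.shape
        ys, xs = np.mgrid[0:h, 0:w]
        gx, gy = global_grasp_xy
        # Hypothetical global term: a Gaussian prior centred on the grasp
        # point implied by the category-level continuous pose estimate.
        global_prior = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2.0 * sigma ** 2))
        # Weighted fusion of local evidence and the global prior; the global
        # term suppresses local false positives far from the predicted pose.
        fused = alpha * local_scores + (1.0 - alpha) * global_prior
        iy, ix = np.unravel_index(np.argmax(fused), fused.shape)
        return (ix, iy), fused

For example, fuse_grasp_affordances(scores, (120, 80)) would bias the local score maxima toward the globally predicted grasp location near pixel (120, 80).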

URL: http://www.icsi.berkeley.edu/pubs/vision/visualgrasp11.pdf
Bibliographic Notes

Proceedings of the First IEEE Workshop on Challenges and Opportunities in Robot Perception at the International Conference on Computer Vision (ICCV 2011), pp. 998-1005, Barcelona, Spain

Abbreviated Authors

H. O. Song, M. Fritz, C. Gu, and T. Darrell

ICSI Research Group

Vision

ICSI Publication Type

Article in conference proceedings