Translating Videos to Natural Language Using Deep Recurrent Neural Networks

Title: Translating Videos to Natural Language Using Deep Recurrent Neural Networks
Publication Type: Conference Paper
Year of Publication: 2015
Authors: Venugopalan, S., Xu, H., Donahue, J., Rohrbach, M., Mooney, R., & Saenko, K.
Other Numbers: 3750
Abstract

Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.
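To make the described architecture concrete, here is a minimal, hypothetical sketch (not the authors' released code) of the kind of convolutional-plus-recurrent captioner the abstract outlines: frame features from a CNN pretrained on image classification are mean-pooled over time and condition an LSTM that generates the sentence word by word. All dimensions, names, and the PyTorch framing are illustrative assumptions.

```python
# Illustrative sketch only: CNN-feature mean pooling + LSTM sentence decoder,
# in the spirit of the paper. Frame features are assumed to be pre-extracted
# by a CNN trained on large-scale image classification.
import torch
import torch.nn as nn

class MeanPoolLSTMCaptioner(nn.Module):
    def __init__(self, feat_dim=4096, embed_dim=512, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.frame_proj = nn.Linear(feat_dim, embed_dim)   # project CNN frame features
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, num_frames, feat_dim) pre-extracted CNN features
        # captions:    (batch, seq_len) word indices, teacher-forced during training
        video_vec = self.frame_proj(frame_feats.mean(dim=1))        # mean-pool over frames
        words = self.word_embed(captions)                           # (batch, seq_len, embed_dim)
        inputs = torch.cat([video_vec.unsqueeze(1), words], dim=1)  # video vector conditions the LSTM
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                                     # per-step vocabulary logits
```

At inference time such a model would be run autoregressively, feeding each predicted word back in until an end-of-sentence token is produced; the details above are a simplified reading of the abstract, not the paper's exact training setup.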

Acknowledgment

This work was partially supported by a postdoctoral fellowship funded by the Federal Ministry of Education and Research (BMBF) through the FITweltweit program, administered by the German Academic Exchange Service (DAAD).

URL: http://www.icsi.berkeley.edu/pubs/vision/translatingvideos15.pdf
Bibliographic Notes

Appeared in the proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2015), Denver, Colorado.

Abbreviated Authors

S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and K. Saenko

ICSI Research Group

Vision

ICSI Publication Type

Article in conference proceedings