Field Studies with Multimedia Big Data: Opportunities and Challenges

Title: Field Studies with Multimedia Big Data: Opportunities and Challenges
Publication Type: Technical Report
Year of Publication: 2017
Authors: Krell, M. Michael, Bernd, J., Li, Y., Ma, D., Choi, J., Ellsworth, M., Borth, D., & Friedland, G.
Abstract

Social multimedia users are increasingly sharing all kinds of data about the world. They do this for their own reasons, not to provide data for field studies—but the trend presents a great opportunity for scientists. The Yahoo Flickr Creative Commons 100 Million (YFCC100M) dataset comprises 99 million images and nearly 800 thousand videos from Flickr, all shared under Creative Commons licenses. To enable scientists to leverage these media records for field studies, we propose a new framework that extracts targeted subcorpora from the YFCC100M, in a format usable by researchers who are not experts in big data retrieval and processing. This paper discusses a number of examples from the literature—as well as some entirely new ideas—of natural and social science field studies that could be piloted, supplemented, replicated, or conducted using YFCC100M data. These examples illustrate the need for a general new open-source framework for Multimedia Big Data Field Studies. There is currently a gap between the separate aspects of what multimedia researchers have shown to be possible with consumer-produced big data and the follow-through of creating a comprehensive field study framework that supports scientists across other disciplines. To bridge this gap, we must meet several challenges. For example, the framework must handle unlabeled and noisily labeled data to produce a filtered dataset for a scientist—who naturally wants it to be both as large and as clean as possible. This requires an iterative approach that provides access to statistical summaries and refines the search by constructing new classifiers. The first phase of our framework is available as Multimedia Commons Search (http://search.mmcommons.org, MMCS), an intuitive interface that enables complex search queries at a large scale. After outlining our proposal for the general framework and discussing the potential example studies, this paper describes and evaluates a practical application to the study of origami.
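
As an illustration of the kind of subcorpus extraction the abstract describes, the following minimal Python sketch filters a YFCC100M-style metadata dump by user tag (using origami, the paper's case-study topic, as the query). The file name and column indices are assumptions for illustration only; they are not the framework's actual implementation or the official YFCC100M metadata schema, and would need to be adjusted to the real release.

    # Hypothetical sketch: extract a tag-based subcorpus from a YFCC100M-style
    # metadata dump. File name and column indices are assumptions, not the
    # official schema; adjust them to match the actual metadata release.
    import csv

    METADATA_FILE = "yfcc100m_dataset.tsv"  # assumed: one tab-separated record per media item
    TAG_COLUMN = 8                          # assumed: comma-separated user tags
    URL_COLUMN = 14                         # assumed: page/download URL for the item
    QUERY_TAG = "origami"                   # example topic from the paper's case study


    def extract_subcorpus(path, tag, limit=None):
        """Yield (record_id, url) pairs whose user tags contain the query tag."""
        matches = 0
        with open(path, newline="", encoding="utf-8") as handle:
            for row in csv.reader(handle, delimiter="\t"):
                if len(row) <= max(TAG_COLUMN, URL_COLUMN):
                    continue  # skip malformed records
                tags = row[TAG_COLUMN].lower().split(",")
                if tag in tags:
                    yield row[0], row[URL_COLUMN]
                    matches += 1
                    if limit is not None and matches >= limit:
                        return


    if __name__ == "__main__":
        # Print the first ten candidate records for manual inspection; in the
        # iterative workflow the abstract describes, such a candidate list would
        # then be refined with statistical summaries and new classifiers.
        for record_id, url in extract_subcorpus(METADATA_FILE, QUERY_TAG, limit=10):
            print(record_id, url)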

Acknowledgment

We would like to thank Per Pascal Grube for his assistance with the AWS search server and Rick Jaffe and John Lowe for helping us make the Solr search engine open source. We would also like to thank Angjoo Kanazawa, Roman Fedorov, David M. Romps, and Elise Stickles for sharing their expertise on 3D pose estimation, snow index detection, cloud coverage estimation, and language and human behavior, respectively. Thanks also to Alan Woodley and the anonymous reviewers for providing feedback on the paper. Finally, we thank all the people who are providing the datasets and annotations being integrated into our MMBDS framework. This work was supported by a fellowship from the FITweltweit program of the German Academic Exchange Service (DAAD), by the Undergraduate Research Apprenticeship Program (URAP) at the University of California, Berkeley, by grants from the U.S. National Science Foundation (1251276 and 1629990), and by a collaborative Laboratory Directed Research & Development grant led by Lawrence Livermore National Laboratory (U.S. Dept. of Energy contract DE-AC52-07NA27344). (Findings and conclusions are those of the authors, and do not necessarily represent the views of the funders.)

URL: http://www.icsi.berkeley.edu/pubs/multimedia/bigdatastudies17.pdf
ICSI Research Group: Audio and Multimedia