A Novel Fusion Method for Integrating Multiple Modalities and Knowledge for Multimodal Location Estimation

Title: A Novel Fusion Method for Integrating Multiple Modalities and Knowledge for Multimodal Location Estimation
Publication Type: Conference Paper
Year of Publication: 2013
Authors: Kelm, P., Schmiedeke, S., Choi, J., Friedland, G., Ekambaram, V., Ramchandran, K., & Sikora, T.
Page(s): 7-12
Other Numbers: 3650
Abstract

This article describes a novel fusion approach that uses multiple modalities and knowledge sources to improve the accuracy of multimodal location estimation algorithms. The problem of "multimodal location estimation," or "placing," involves associating geo-locations with consumer-produced multimedia data, such as videos or photos, that have not been tagged with GPS coordinates. Our algorithm integrates evidence from the visual and textual modalities with external geographical knowledge bases by building a hierarchical model that combines data-driven and semantic methods to group visual and textual features together within geographical regions. We evaluate our algorithm on the MediaEval 2010 Placing Task dataset and show that our system significantly outperforms other state-of-the-art approaches, successfully locating about 40% of the videos to within a radius of 100 m.
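The hierarchical idea in the abstract can be pictured as descending a tree of geographical regions, scoring each candidate region with a fused combination of textual and visual evidence and recursing into the best match. The Python sketch below illustrates only that general scheme; the region hierarchy, the per-region scores, and the fixed late-fusion weight are invented for illustration and do not reproduce the authors' actual model.

# A minimal sketch of hierarchical multimodal fusion for location
# estimation. Region names, scores, and the weighting are illustrative
# assumptions, not the method evaluated in the paper.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    lat: float
    lon: float
    children: list  # finer-grained sub-regions; empty at the leaf level

def fuse_scores(text_score: float, visual_score: float,
                text_weight: float = 0.7) -> float:
    """Late fusion: weighted sum of per-modality scores (assumed weights)."""
    return text_weight * text_score + (1.0 - text_weight) * visual_score

def estimate_location(region: Region, text_scores: dict,
                      visual_scores: dict) -> Region:
    """Descend the hierarchy, taking the best-fused child at each level."""
    while region.children:
        region = max(
            region.children,
            key=lambda r: fuse_scores(text_scores.get(r.name, 0.0),
                                      visual_scores.get(r.name, 0.0)),
        )
    return region

# Toy hierarchy: world -> country -> cities (coordinates approximate).
barcelona = Region("Barcelona", 41.39, 2.17, [])
madrid = Region("Madrid", 40.42, -3.70, [])
spain = Region("Spain", 40.0, -4.0, [barcelona, madrid])
world = Region("World", 0.0, 0.0, [spain])

# Hypothetical per-region match scores from text tags and visual features.
text = {"Spain": 0.9, "Barcelona": 0.8, "Madrid": 0.3}
visual = {"Spain": 0.6, "Barcelona": 0.7, "Madrid": 0.4}

best = estimate_location(world, text, visual)
print(best.name, best.lat, best.lon)  # -> Barcelona 41.39 2.17

A coarse-to-fine descent like this keeps each decision cheap, since only the children of the current region are scored; the paper's contribution lies in how the region grouping and the modality fusion are actually learned, which this toy example does not attempt to capture.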

URL: https://www.icsi.berkeley.edu/pubs/multimedia/novelfusion13.pdf
Bibliographic Notes

Proceedings of the Second ACM International Workshop on Geotagging and Its Application in Multimedia, Barcelona, Spain, pp. 7-12

Abbreviated Authors

P. Kelm, S. Schmiedeke, J. Choi, G. Friedland, V. Ekambaram, K. Ramchandran, and T. Sikora

ICSI Research Group

Audio and Multimedia

ICSI Publication Type

Article in conference proceedings