Factorized Multi-Modal Topic Model

Title

Factorized Multi-Modal Topic Model

Publication Type

Conference Paper

Year of Publication

2012

Authors

Virtanen, S., Jia, Y., Klami, A., & Darrell, T.

Other Numbers

3450

Abstract

Multi-modal data collections, such as corpora of paired images and text snippets, require analysis methods beyond single-view component and topic models. For continuous observations the current dominant approach is based on extensions of canonical correlation analysis, factorizing the variation into components shared by the different modalities and those private to each of them. For count data, multiple variants of topic models attempting to tie the modalities together have been presented. All of these, however, lack the ability to learn components private to one modality, and consequently will try to force dependencies even between minimally correlating modalities. In this work we combine the two approaches by presenting a novel HDP-based topic model that automatically learns both shared and private topics. The model is shown to be especially useful for querying the contents of one domain given samples of the other.
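The following toy sketch (not the authors' code) only illustrates the shared/private factorization idea described in the abstract: paired count data are generated from topics that are either shared across both modalities or private to one of them. For simplicity it uses a fixed, finite number of topics and made-up dimensions, whereas the paper's model places a hierarchical Dirichlet process (HDP) prior over topics and infers them from data.

import numpy as np

rng = np.random.default_rng(0)

V1, V2 = 50, 40          # vocabulary sizes of the two modalities (e.g. visual words, text words)
K_shared, K_priv = 3, 2  # topics shared by both views vs. topics private to each view
D, N = 100, 80           # documents and tokens per modality per document

# Topic-word distributions: a shared topic has a word distribution in
# both modalities; a private topic exists in only one modality.
beta1_shared = rng.dirichlet(np.ones(V1), K_shared)
beta2_shared = rng.dirichlet(np.ones(V2), K_shared)
beta1_priv = rng.dirichlet(np.ones(V1), K_priv)
beta2_priv = rng.dirichlet(np.ones(V2), K_priv)

X1 = np.zeros((D, V1), dtype=int)  # word counts, modality 1
X2 = np.zeros((D, V2), dtype=int)  # word counts, modality 2
for d in range(D):
    # Per-document mixing weights over shared + private topics.
    theta1 = rng.dirichlet(np.ones(K_shared + K_priv))
    theta2 = rng.dirichlet(np.ones(K_shared + K_priv))
    # Shared topics reuse the SAME relative activations in both views,
    # coupling the modalities; private topics vary independently.
    shared_weights = rng.dirichlet(np.ones(K_shared))
    theta1[:K_shared] = shared_weights * theta1[:K_shared].sum()
    theta2[:K_shared] = shared_weights * theta2[:K_shared].sum()

    for _ in range(N):
        z = rng.choice(K_shared + K_priv, p=theta1)
        beta = beta1_shared[z] if z < K_shared else beta1_priv[z - K_shared]
        X1[d, rng.choice(V1, p=beta)] += 1
    for _ in range(N):
        z = rng.choice(K_shared + K_priv, p=theta2)
        beta = beta2_shared[z] if z < K_shared else beta2_priv[z - K_shared]
        X2[d, rng.choice(V2, p=beta)] += 1

print(X1.shape, X2.shape)  # paired count matrices for the two modalities

Because only the shared topics carry cross-modal information, a model of this form can answer queries about one modality given the other while leaving modality-specific variation to the private topics.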

Acknowledgment

AK and SK were supported by the COIN Finnish Center of Excellence and the FuNeSoMo exchange project. AK was additionally supported by the Academy of Finland (decision number 133818) and the PASCAL2 European Network of Excellence.

URL

https://www.icsi.berkeley.edu/pubs/vision/ICSI_factorizedmultimodal12.pdf

Bibliographic Notes

Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), Catalina Island, California

Abbreviated Authors

S. Virtanen, Y. Jia, A. Klami, and T. Darrell

ICSI Research Group

Vision

ICSI Publication Type

Article in conference proceedings