Journal
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3374754
Keywords
Cross-modal retrieval; heterogeneous data; deep learning; Wiki-Flickr Event dataset
Category
Funding
- National Natural Science Foundation of China [61703109, 91748107, 61902077, 61876065, U1611461]
- Guangdong Innovative Research Team Program [2014ZT05G157]
- Natural Science Foundation of Guangdong Province, China [2018A0303130022]
- Science and Technology Program of Guangzhou, China [201904010200]
- Science and Technology Planning Project of Guangdong Province, China [2016A010101012]
- Research Grants Council of the Hong Kong Special Administrative Region, China (Collaborative Research Fund) [C1031-18G]
Abstract
In this article, we propose to learn a shared semantic space with correlation alignment (S³CA) for multimodal data representations, which aligns nonlinear correlations of multimodal data distributions in deep neural networks designed for heterogeneous data. In the context of cross-modal (event) retrieval, we design a neural network with convolutional layers and fully connected layers to extract features for images, including images on Flickr-like social media. Simultaneously, we exploit a fully connected neural network to extract semantic features for text documents, including news articles from news media. In particular, nonlinear correlations of layer activations in the two neural networks are aligned with correlation alignment during the joint training of the networks. Furthermore, we project the multimodal data into a shared semantic space for cross-modal (event) retrieval, where the distances between heterogeneous data samples can be measured directly. In addition, we contribute a Wiki-Flickr Event dataset in which, unlike existing paired datasets, the multimodal data samples do not describe each other in pairs; instead, they all describe semantic events. Extensive experiments conducted on both paired and unpaired datasets demonstrate the effectiveness of S³CA, which outperforms state-of-the-art methods.
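The correlation alignment mentioned in the abstract matches covariance statistics of layer activations across the two modality networks. A minimal sketch of such an alignment loss (the CORAL-style squared Frobenius distance between activation covariances) is shown below; this is an illustration of the general technique, not the authors' exact implementation, and the function name `coral_loss` is our own.

```python
import numpy as np

def coral_loss(feat_a, feat_b):
    """Correlation-alignment loss between two batches of layer
    activations, each of shape (n_samples, d).

    Computes the squared Frobenius norm of the difference between
    the two covariance matrices, scaled by 1 / (4 d^2) as in CORAL.
    Sketch only; the paper's networks align activations of image
    and text branches during joint training.
    """
    d = feat_a.shape[1]
    # Covariance of each modality's activations (features in columns).
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    # Squared Frobenius norm of the covariance difference.
    return np.sum((cov_a - cov_b) ** 2) / (4.0 * d * d)
```

Because the loss depends only on covariances, it is invariant to shifting either batch by a constant; minimizing it pulls the second-order statistics of the two modality-specific feature spaces together before projection into the shared semantic space.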