Article

Learning unified binary codes for cross-modal retrieval via latent semantic hashing

Journal

NEUROCOMPUTING
Volume 213, Issue -, Pages 191-203

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2015.11.133

Keywords

Cross-modal retrieval; Hashing; Binary representation; Sparse coding; Matrix factorization

Funding

  1. Open Fund of the Key Laboratory of Marine Geology and Environment, Chinese Academy of Sciences [MGE2015KG02]
  2. Research Fund of the State Key Laboratory of Marine Geology, Tongji University [MGK1407]
  3. Research Fund of the State Key Laboratory of Ocean Engineering, Shanghai Jiaotong University [OEK1315]
  4. [24300074]
  5. Grants-in-Aid for Scientific Research [15F15077] (Funding Source: KAKEN)

Abstract

The amount of multimedia data such as images and text on social websites is growing rapidly, creating demand for effective and efficient cross-modal retrieval. Cross-modal hashing methods have recently attracted considerable attention because they learn compact binary codes for heterogeneous data, enabling large-scale similarity search. To construct the cross-correlation between different modalities, these methods generally seek a joint abstraction space into which the heterogeneous data can be projected; a quantization rule then converts the abstract representation into binary codes. However, such methods may not effectively bridge the semantic gap through the latent abstraction space, because they fail to capture latent information shared between heterogeneous data. In addition, most of them apply the simplest quantization scheme (the sign function), which can cause information loss in the abstract representation and yield inferior binary codes. To address these challenges, this paper presents a novel cross-modal hashing method that generates unified binary codes combining different modalities. Specifically, we first extract semantic features from the image and text modalities to capture latent information. These semantic features are then projected into a joint abstraction space. Finally, the abstraction space is rotated to produce unified binary codes with much lower quantization loss while preserving the locality structure of the projected data. We integrate these learning procedures into an iterative algorithm that solves for the binary codes. Moreover, we exploit class label information to further reduce the semantic gap between modalities and thereby improve the learned codes. Extensive experiments on four multimedia datasets show that the proposed binary coding schemes outperform several state-of-the-art methods in cross-modal scenarios. (C) 2016 Elsevier B.V. All rights reserved.
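As a concrete illustration of the rotation-based quantization step the abstract describes, the sketch below alternates between binarizing the rotated joint representation and re-solving for an orthogonal rotation that reduces the quantization loss ||B - VR||_F^2 (the classic ITQ-style recipe of Gong and Lazebnik). This is a minimal sketch under stated assumptions, not the paper's actual algorithm: the function name learn_rotation, the use of NumPy, and the toy data are all illustrative.

```python
import numpy as np

def learn_rotation(V, n_iter=50, seed=0):
    """Learn an orthogonal rotation R that reduces the quantization loss
    ||B - V R||_F^2, where B = sign(V R) are the unified binary codes.

    Alternates between (a) fixing R and binarizing the rotated data and
    (b) fixing B and solving an orthogonal Procrustes problem via an SVD.

    V : (n_samples, n_bits) array in the joint abstraction space
        (assumed zero-centered).
    """
    rng = np.random.default_rng(seed)
    # Start from a random orthogonal rotation (QR of a Gaussian matrix).
    R, _ = np.linalg.qr(rng.standard_normal((V.shape[1], V.shape[1])))
    for _ in range(n_iter):
        B = np.where(V @ R >= 0, 1.0, -1.0)   # fix R: binarize with sign
        U, _, Wt = np.linalg.svd(V.T @ B)     # fix B: Procrustes solution
        R = U @ Wt                            # maximizes tr(R^T V^T B)
    return np.where(V @ R >= 0, 1.0, -1.0), R

# Toy usage: a stand-in for image/text features already projected into a
# 32-bit joint space (e.g. by a learned matrix factorization).
joint = np.random.default_rng(1).standard_normal((1000, 32))
codes, R = learn_rotation(joint)
print(codes.shape)   # (1000, 32); entries are -1.0 or +1.0
```

Each alternation step cannot increase the quantization loss, so the objective decreases monotonically; this is why such rotations yield better codes than applying the sign function directly to the projected data.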

