Article

Cluster-sensitive Structured Correlation Analysis for Web cross-modal retrieval

Journal

NEUROCOMPUTING
Volume 168, Pages 747-760

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2015.05.049

Keywords

Correlation learning; Cluster-sensitive; Structured correlation model; Correspondence missing

Funding

  1. National Basic Research Program of China (973 Program) [2012CB316400, 2015CB351802]
  2. 863 program of China [2014AA015202]
  3. National Natural Science Foundation of China (NSFC) [61025011, 61303160, 61332016, 61390511, 61322212, 61473273, 61429201]
  4. ARO Grant [W911NF-12-1-0057]
  5. NEC Laboratories of America

Modern cross-modal retrieval technology must find semantically relevant content across heterogeneous modalities. Because previous studies construct unified dense correlation models on small-scale cross-modal data, they cannot process large-scale Web data: (a) the content of Web cross-media is divergent; (b) the topic-sensitive structure information in the high-dimensional space is neglected; and (c) data must be organized as strictly corresponding pairs, which is not satisfied in real-world scenarios. To address these challenges, we propose a cluster-sensitive cross-modal correlation learning framework. First, a set of cluster-sensitive correlation sub-models is learned instead of a unified correlation model, which better fits the content divergence across modalities. We impose structured sparsity regularization on the projection vectors to learn a set of interpretable structured sparse correlation sub-models. Second, to compensate for missing correspondences, we take full advantage of both intra-modal affinity and inter-modal co-occurrence: the projected coordinates of adjacent data within a modality tend to be similar, and the inconsistency of the cluster-sensitive projections is minimized. The learned correlation model adapts to the content divergence and thus achieves better model generality and a better bias-variance trade-off. Extensive experiments on two large-scale cross-modal datasets demonstrate the effectiveness of our approach. (C) 2015 Elsevier B.V. All rights reserved.
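To make the "cluster-sensitive sub-models" idea concrete, the following is a minimal sketch that fits one plain regularized CCA sub-model per cluster and returns a pair of projection matrices for each. This is an illustrative assumption only: the paper's actual method additionally uses structured sparsity regularization and intra-modal consistency terms, which are omitted here, and all names (`fit_cluster_cca`, `reg`, `dim`) are hypothetical.

```python
import numpy as np

def fit_cluster_cca(X, Y, labels, dim=2, reg=1e-3):
    """Fit one regularized CCA sub-model per cluster (illustrative sketch).

    X, Y    : paired features from two modalities, shapes (n, dx) and (n, dy).
    labels  : cluster assignment for each pair, shape (n,).
    Returns : {cluster_id: (Wx, Wy)} projection matrices into a shared space.
    """
    models = {}
    for c in np.unique(labels):
        # Center each modality within the cluster.
        Xc = X[labels == c] - X[labels == c].mean(axis=0)
        Yc = Y[labels == c] - Y[labels == c].mean(axis=0)
        n = len(Xc)
        # Regularized (auto-/cross-) covariance matrices.
        Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
        Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
        Cxy = Xc.T @ Yc / n
        # Whiten via Cholesky factors; the SVD of the whitened
        # cross-covariance yields the canonical directions.
        Lx = np.linalg.cholesky(Cxx)
        Ly = np.linalg.cholesky(Cyy)
        K = np.linalg.solve(Lx, np.linalg.solve(Ly, Cxy.T).T)
        U, s, Vt = np.linalg.svd(K)
        Wx = np.linalg.solve(Lx.T, U[:, :dim])
        Wy = np.linalg.solve(Ly.T, Vt[:dim].T)
        models[c] = (Wx, Wy)
    return models
```

At query time, one would project a query with the sub-model of its (estimated) cluster and rank items of the other modality by similarity in the shared space; choosing the sub-model per cluster is what lets the projections adapt to content divergence rather than forcing a single unified model.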
