Journal
NEUROCOMPUTING
Volume 253, Issue -, Pages 127-134
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2016.10.087
Keywords
Low-rank representation; Unsupervised feature selection; Feature self-representation; Multimodal; Hypergraph embedding
Funding
- China 1000-Plan National Distinguished Professorship
- National Natural Science Foundation of China [61263035, 61573270, 61672177, 61262075, 61662017]
- China 973 Program [2013CB329404]
- China Key Research Program [2016YFB1000905]
- Guangxi Natural Science Foundation [2015GXNSFCB139011]
- China Postdoctoral Science Foundation [2015M570837]
- Innovation Project of Guangxi Graduate Education [YCSZ2016046]
- Guangxi High Institutions' Program of Introducing 100 High-Level Overseas Talents
- Guangxi Collaborative Innovation Center of Multi Source Information Integration and Intelligent Processing
- Guangxi Bagui Teams for Innovation and Research
- Project "Application and Research of Big Data Fusion in Inter-City Traffic Integration of the Xijiang River-Pearl River Economic Belt"
Dimension reduction methods have attracted much attention because they can effectively address the 'curse of dimensionality' problem. In this paper, we propose an unsupervised feature selection method that efficiently selects a subset of informative features from unlabeled data. We integrate the low-rank constraint, hypergraph theory, and the self-representation property of features in a unified framework to conduct unsupervised feature selection. Specifically, we represent each feature by the other features, exploiting the feature-level self-representation property. We then embed a low-rank constraint to account for the relations among features. Moreover, a hypergraph regularizer is employed to capture both the high-order relations and the local structure of the data. This enables the proposed model to take into account both the global structure of the data (via the low-rank constraint) and the local structure of the data (via the hypergraph regularizer). We further use an l(2,p)-norm regularizer to induce sparsity in the representation coefficients, which makes the proposed model more robust than previous models. Experimental results on benchmark datasets showed that, compared with state-of-the-art methods, the proposed method effectively selected the most informative features by removing the adverse effect of irrelevant/redundant features. (C) 2017 Elsevier B.V. All rights reserved.
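The core idea of feature-level self-representation can be illustrated with a simplified sketch: each feature is reconstructed from the other features as X ≈ XW, and features are ranked by the row norms of the coefficient matrix W. The sketch below is a minimal illustration only, not the authors' method: it uses an l(2,1)-norm penalty (the p = 1 case of the l(2,p) regularizer) solved by iteratively reweighted least squares, and it omits the low-rank constraint and the hypergraph regularizer described in the abstract. All function and variable names are our own.

```python
import numpy as np

def self_representation_select(X, lam=0.1, n_iters=30, eps=1e-6):
    """Rank features of X (n_samples x n_features) by self-representation.

    Solves  min_W ||X - X W||_F^2 + lam * ||W||_{2,1}
    with iteratively reweighted least squares (IRLS):
    each step solves (X^T X + lam * D) W = X^T X, where D is a diagonal
    matrix with D_ii = 1 / (2 * ||w_i||_2) for row i of W.
    Features whose rows of W have large l2-norm help reconstruct the
    other features and are deemed informative.
    """
    n, d = X.shape
    G = X.T @ X                       # d x d Gram matrix of the features
    D = np.ones(d)                    # IRLS row weights, initialized to 1
    W = np.eye(d)
    for _ in range(n_iters):
        W = np.linalg.solve(G + lam * np.diag(D), G)
        row_norms = np.linalg.norm(W, axis=1)
        D = 1.0 / (2.0 * np.maximum(row_norms, eps))  # eps guards division by 0
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1]   # feature indices, most informative first

# Toy usage: a near-constant feature cannot cheaply reconstruct the others,
# so the l(2,1) penalty drives its row of W toward zero and it ranks last.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
X[:, 4] *= 1e-3                       # make feature 4 nearly uninformative
ranking = self_representation_select(X)
```

The l(2,1) penalty is what makes whole rows of W vanish jointly, turning a reconstruction problem into a feature-ranking one; the full model in the paper additionally constrains W to be low rank and adds a hypergraph Laplacian term to preserve local, high-order structure.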