Article

High-dimensional supervised feature selection via optimized kernel mutual information

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 108, Pages 81-95

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2018.04.037

Keywords

Feature selection; Kernel method; Mutual information; Classification; Optimize function; Machine learning

Funding

  1. Guangdong Provincial Government of China through the Computational Science Innovative Research Team program
  2. Guangdong Province Key Laboratory of Computational Science at Sun Yat-Sen University
  3. Technology Program of Guangdong [2012B091100334]
  4. National Natural Science Foundation of China [11471012]
  5. China Scholarship Council [201506385010]

Abstract

Feature selection is very important in pattern recognition for reducing the dimensionality of data and improving the efficiency of learning algorithms. Recent research on new approaches has focused mostly on improving accuracy and reducing computing time. This paper presents a flexible feature-selection method based on an optimized kernel mutual information (OKMI) approach. Mutual information (MI) has been applied successfully in decision trees to rank variables; its aim is to connect class labels with the distribution of the experimental data. The use of MI removes irrelevant features and decreases redundant features. However, MI is usually less robust when the data distribution is not centralized. To overcome this problem, we propose the OKMI approach, which combines MI with a kernel function. This approach may be used for feature selection with nonlinear models by defining kernels for feature vectors and class-label vectors. By optimizing the objective functions, we develop a new feature-selection algorithm that combines MI and kernel learning, and we discuss the relationship among various kernel-selection methods. Experiments were conducted to compare the new technique with other methods on various data sets, and in each case the OKMI approach performs better than the other methods in terms of feature-classification accuracy and computing time. The OKMI method avoids the computational complexity of estimating probability distributions by finding the optimal features at very low computational cost. As a result, the OKMI method with the proposed algorithm is effective and robust over a wide range of real applications in expert systems. Crown Copyright (C) 2018 Published by Elsevier Ltd. All rights reserved.
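The record does not reproduce the OKMI algorithm itself, but the general idea it builds on (scoring each feature by a kernel-based dependence measure against the class labels and keeping the top-ranked features) can be sketched as follows. This is a minimal illustration, not the authors' OKMI formulation: the HSIC-style score, the RBF and delta kernels, and parameters such as gamma are assumptions made for the example.

# Minimal sketch of kernel-based feature ranking (illustrative, not the paper's OKMI method).
import numpy as np

def rbf_kernel(x, gamma=1.0):
    # RBF Gram matrix for a 1-D feature vector x of shape (n,)
    d = (x[:, None] - x[None, :]) ** 2
    return np.exp(-gamma * d)

def label_kernel(y):
    # Delta kernel on class labels: K[i, j] = 1 if samples i and j share a label
    return (y[:, None] == y[None, :]).astype(float)

def kernel_dependence_score(x, y, gamma=1.0):
    # HSIC-style dependence between one feature column x and the labels y
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kx = H @ rbf_kernel(x, gamma) @ H
    Ky = H @ label_kernel(y) @ H
    return np.trace(Kx @ Ky) / (n - 1) ** 2

def select_features(X, y, k, gamma=1.0):
    # Rank features by their dependence score and return the indices of the top k
    scores = np.array([kernel_dependence_score(X[:, j], y, gamma)
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

# Example usage on synthetic data: only features 3 and 7 carry label information
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)
    print(select_features(X, y, k=5))

Unlike this plain ranking sketch, the paper's OKMI approach optimizes an objective over kernels for both feature vectors and class-label vectors, which is what gives it robustness to non-centralized data distributions.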
