Article

Regularized fisher linear discriminant through two threshold variation strategies for imbalanced problems

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 150, Issue -, Pages 57-73

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2018.02.035

Keywords

Imbalanced data; Pattern classification; Fisher linear discriminant; Regularization; Heuristic learning

Funding

  1. Natural Science Foundation of China [61672227]
  2. Shuguang Program - Shanghai Education Development Foundation
  3. Shanghai Municipal Education Commission
  4. Action Plan for Innovation on Science and Technology Projects of Shanghai [16511101000]

Abstract

Fisher Linear Discriminant (FLD) has been widely applied to classification tasks due to its simple structure, analytical optimization, and useful criterion. However, when dealing with imbalanced datasets, even though the weight vector of FLD can be trained correctly to preserve the global distribution information of the samples, the threshold of FLD may be seriously misled by the extreme class proportion. In order to modify the threshold while preserving the weight vector, so as to improve FLD in imbalanced cases, this paper first regularizes the original FLD in a way inspired by locality preserving projection, and then applies two strategies to optimize the threshold: the multi-threshold selection strategy trains several FLDs with different empirically defined thresholds and then selects the optimal one, while the threshold-eliminated strategy generates two hyperplanes parallel to the original one built by FLD and uses a heuristic similarity metric for prediction. In other words, the former seeks a new threshold to replace the old one, while the latter ignores the original threshold altogether. After introducing both strategies into the regularized FLD, two new classifiers are proposed in this paper, abbreviated as RFLD-S1 and RFLD-S2, respectively. Comprehensive comparison experiments on forty-one datasets against nine typical classifiers validate the effectiveness of the proposed methods. In particular, RFLD-S1 outperforms RFLD-S2 and achieves the best results on most datasets. (C) 2018 Elsevier B.V. All rights reserved.
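The abstract only describes the two strategies at a high level. The sketch below is a minimal illustration, not the authors' implementation: it fits an FLD weight vector with a ridge-style regularizer and then performs a multi-threshold search, using a G-mean selection criterion over a uniform candidate grid. The regularizer, the candidate grid, and the G-mean criterion are all assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch: regularized FLD weight vector + multi-threshold search.
# The reg term, threshold candidates, and G-mean criterion are assumptions.
import numpy as np

def fit_fld(X, y, reg=1e-3):
    """Compute w = (Sw + reg*I)^-1 (mu1 - mu0) for binary labels y in {0, 1}."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix, regularized to keep it invertible.
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1)
          + reg * np.eye(X.shape[1]))
    return np.linalg.solve(Sw, mu1 - mu0)

def select_threshold(X, y, w, n_candidates=50):
    """Scan candidate thresholds along w and keep the one with the best G-mean."""
    scores = X @ w
    best_t, best_g = None, -1.0
    for t in np.linspace(scores.min(), scores.max(), n_candidates):
        pred = (scores >= t).astype(int)
        tpr = np.mean(pred[y == 1] == 1)   # sensitivity (minority class)
        tnr = np.mean(pred[y == 0] == 0)   # specificity (majority class)
        g = np.sqrt(tpr * tnr)             # G-mean balances both classes
        if g > best_g:
            best_t, best_g = t, g
    return best_t
```

Selecting the threshold by a class-balanced score rather than the midpoint of the projected class means is one simple way a threshold-variation step can avoid being dominated by the majority class; the paper's actual candidate definitions and similarity metric may differ.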
