Article

Diverse reduct subspaces based co-training for partially labeled data

Journal

INTERNATIONAL JOURNAL OF APPROXIMATE REASONING
Volume 52, Issue 8, Pages 1103-1117

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ijar.2011.05.006

Keywords

Rough set theory; Markov blanket; Attribute reduction; Rough co-training; Partially labeled data

Funding

  1. National Natural Science Foundation of China [60970061, 61075056]
  2. Shanghai Leading Academic Discipline Project [B004]


Rough set theory is an effective supervised learning model for labeled data. In practice, however, problems often involve both labeled and unlabeled data, which lies outside the realm of traditional rough set theory. In this paper, the problem of attribute reduction for partially labeled data is first studied. With a new definition of the discernibility matrix, a Markov blanket based heuristic algorithm is put forward to compute the optimal reduct of partially labeled data. A novel rough co-training model is then proposed, which can capitalize on the unlabeled data to improve the performance of a rough classifier learned from only a few labeled examples. The model employs two diverse reducts of the partially labeled data to train its base classifiers on the labeled data, and then makes the base classifiers learn from each other on the unlabeled data iteratively. The classifiers constructed in different reduct subspaces benefit from their diversity on the unlabeled data and significantly improve the performance of the rough co-training model. Finally, the rough co-training model is theoretically analyzed, and an upper bound on its performance improvement is given. The experimental results show that the proposed model outperforms other representative models in terms of accuracy and even compares favorably with a rough classifier trained with all training data labeled. (C) 2011 Elsevier Inc. All rights reserved.
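The co-training loop described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's exact rough co-training algorithm: nearest-centroid classifiers stand in for rough-set classifiers, and two disjoint feature-index lists (`view1`, `view2`) stand in for the two diverse reducts; in each round, each classifier labels the unlabeled example it is most confident about and hands it to the other classifier's training set.

```python
def centroid_fit(X, y):
    """Fit a nearest-centroid model: per-class feature means."""
    sums, counts = {}, {}
    for xi, yi in zip(X, y):
        s = sums.setdefault(yi, [0.0] * len(xi))
        for j, v in enumerate(xi):
            s[j] += v
        counts[yi] = counts.get(yi, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def centroid_predict(model, x):
    """Return (predicted class, confidence margin to the runner-up centroid)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(model[c], x))
    ranked = sorted(model, key=dist)
    margin = dist(ranked[1]) - dist(ranked[0]) if len(ranked) > 1 else 1.0
    return ranked[0], margin

def co_train(X_lab, y_lab, X_unl, view1, view2, rounds=3):
    """Co-train two view-specific classifiers; return a combined predictor."""
    L1 = [([x[j] for j in view1], yi) for x, yi in zip(X_lab, y_lab)]
    L2 = [([x[j] for j in view2], yi) for x, yi in zip(X_lab, y_lab)]
    U = list(X_unl)
    for _ in range(rounds):
        if not U:
            break
        m1 = centroid_fit([p for p, _ in L1], [c for _, c in L1])
        # classifier 1 labels its most confident unlabeled example for classifier 2
        i1 = max(range(len(U)),
                 key=lambda i: centroid_predict(m1, [U[i][j] for j in view1])[1])
        lbl1, _ = centroid_predict(m1, [U[i1][j] for j in view1])
        L2.append(([U[i1][j] for j in view2], lbl1))
        U.pop(i1)
        if not U:
            break
        m2 = centroid_fit([p for p, _ in L2], [c for _, c in L2])
        # classifier 2 returns the favor for classifier 1
        i2 = max(range(len(U)),
                 key=lambda i: centroid_predict(m2, [U[i][j] for j in view2])[1])
        lbl2, _ = centroid_predict(m2, [U[i2][j] for j in view2])
        L1.append(([U[i2][j] for j in view1], lbl2))
        U.pop(i2)
    m1 = centroid_fit([p for p, _ in L1], [c for _, c in L1])
    m2 = centroid_fit([p for p, _ in L2], [c for _, c in L2])
    def predict(x):
        # combine the two view-specific votes by confidence margin
        p1, c1 = centroid_predict(m1, [x[j] for j in view1])
        p2, c2 = centroid_predict(m2, [x[j] for j in view2])
        return p1 if c1 >= c2 else p2
    return predict
```

In the paper, the two views come from diverse reducts computed via the Markov blanket based reduction, so each view preserves the discernibility of the decision attribute; here the feature split is simply supplied by the caller.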

Authors


