Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 30, Issue 10, Pages 3072-3083
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2018.2870666
Keywords
Area under the curve (AUC) maximization; feature selection; outlier detection; positive-unlabeled (PU) learning
Funding
- NSF [1714136, DMS 1620957]
- NEC Fellowship
- IBM Faculty Award
- Sichuan Science and Technology Program [2018JY0607]
- Fundamental Research Funds for the Central Universities [JBK1801080]
- NSF Grants Division of Communication and Computing Foundations Award [1715027]
- Baylor College of Medicine IDDRC from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, Citizens United for Research in Epilepsy [U54HD083092]
- NIH/NINDS [1R01NS100893]
- Division of Computing and Communication Foundations
- Direct For Computer & Info Scie & Enginr [1715027, 1714136] Funding Source: National Science Foundation
Abstract
Positive-unlabeled (PU) classification is a common scenario in real-world applications such as healthcare, text classification, and bioinformatics, in which we observe only a few samples labeled as positive together with a large volume of unlabeled samples that may contain both positives and negatives. Building robust classifiers for the PU problem is challenging, especially for complex data in which negative samples overwhelm the positives and mislabeled samples or corrupted features exist. To address these three issues, we propose a robust learning framework that unifies area under the curve (AUC) maximization (a metric robust to biased labels), outlier detection (to exclude wrong labels), and feature selection (to exclude corrupted features). We provide generalization error bounds for the proposed model that give insight into its theoretical performance and lead to practical guidance; for example, we find that the unlabeled samples included in training are sufficient as long as their number is comparable to the number of positive samples. Empirical comparisons and two real-world applications, surgical site infection (SSI) prediction and EEG seizure detection, demonstrate the effectiveness of the proposed model.
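To make the AUC-maximization ingredient concrete, the following is a minimal, self-contained sketch of pairwise AUC maximization on PU data, treating the unlabeled pool as (noisy) negatives. This is an illustrative surrogate only, not the paper's full model, which additionally incorporates outlier detection and feature selection; the data, learning rate, and function names here are assumptions for the sketch.

```python
import numpy as np

def auc_surrogate_grad(w, X_pos, X_unl):
    """Gradient of the logistic pairwise AUC surrogate
    L(w) = mean_{i,j} log(1 + exp(-(w.x_i^+ - w.x_j^u))),
    which upper-bounds 1 - AUC of the linear scorer w."""
    diff = X_pos[:, None, :] - X_unl[None, :, :]   # all positive-unlabeled pairs, (n_p, n_u, d)
    margins = diff @ w                              # pairwise score margins, (n_p, n_u)
    sig = 1.0 / (1.0 + np.exp(margins))             # sigma(-margin), the per-pair loss derivative
    return -(sig[..., None] * diff).mean(axis=(0, 1))

rng = np.random.default_rng(0)
d = 5
# Toy PU setup: a few labeled positives, and an unlabeled pool that
# mixes latent negatives with some latent positives.
X_pos = rng.normal(1.0, 1.0, size=(20, d))
X_unl = np.vstack([rng.normal(-1.0, 1.0, size=(80, d)),   # latent negatives
                   rng.normal(1.0, 1.0, size=(20, d))])   # latent positives

# Plain gradient descent on the pairwise surrogate.
w = np.zeros(d)
for _ in range(200):
    w -= 0.5 * auc_surrogate_grad(w, X_pos, X_unl)

# Empirical AUC of the learned scorer: fraction of positive-unlabeled
# pairs ranked correctly (bounded above by ~0.9 here, since 20% of the
# unlabeled pool is latently positive).
scores_p, scores_u = X_pos @ w, X_unl @ w
auc = (scores_p[:, None] > scores_u[None, :]).mean()
print(f"empirical AUC vs. unlabeled pool: {auc:.2f}")
```

Even with the unlabeled set naively treated as negative, the pairwise ranking objective degrades gracefully, which is the intuition behind using AUC as a label-bias-robust metric in the PU setting.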
Authors