Article

Fairness in Semi-Supervised Learning: Unlabeled Data Help to Reduce Discrimination

Journal

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Volume 34, Issue 4, Pages 1763-1774

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2020.3002567

Keywords

Machine learning; Training; Semisupervised learning; Data models; Labeling; Machine learning algorithms; Measurement; Fairness; discrimination; machine learning; semi-supervised learning

Funding

  1. Australian Research Council, Australia [DP190100981]
  2. NSF [III-1526499, III1763325, III-1909323, CNS-1930941]


This paper explores the use of semi-supervised learning to address fairness issues in machine learning, including predicting labels for unlabeled data, resampling to obtain multiple fair datasets, and using ensemble learning to improve accuracy and reduce discrimination. Theoretical analysis and experiments demonstrate that this method achieves a better trade-off between accuracy and fairness.
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair. While research is already underway to formalize a machine-learning concept of fairness and to design frameworks for building fair models, often at some sacrifice in accuracy, most of it is geared toward either supervised or unsupervised learning. Yet two observations inspired us to wonder whether semi-supervised learning might be useful for solving discrimination problems. First, previous studies have shown that increasing the size of the training set may lead to a better trade-off between fairness and accuracy. Second, the most powerful models today require an enormous amount of data to train, which, in practical terms, is likely to come from a combination of labeled and unlabeled data. Hence, in this paper, we present a framework for fair semi-supervised learning in the pre-processing phase, comprising pseudo-labeling to predict labels for unlabeled data, a re-sampling method to obtain multiple fair datasets, and lastly, ensemble learning to improve accuracy and decrease discrimination. A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning. A set of experiments on real-world and synthetic datasets shows that our method is able to use unlabeled data to achieve a better trade-off between accuracy and discrimination.
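The three pre-processing stages described in the abstract can be sketched in code. The sketch below is illustrative only, under assumed simplifications (logistic-regression base learners, a binary sensitive attribute, and balancing each group-label cell as a stand-in for the paper's re-sampling method); the paper's exact procedure may differ.

```python
# Illustrative sketch of the fair semi-supervised pipeline:
# (1) pseudo-label unlabeled data, (2) re-sample group-balanced
# subsets, (3) combine models trained on them by majority vote.
# Data, model choice, and the fair_subsample heuristic are assumptions,
# not the paper's exact method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: features X, binary sensitive attribute s, binary label y.
n_lab, n_unlab = 200, 400
X_lab = rng.normal(size=(n_lab, 2))
s_lab = rng.integers(0, 2, size=n_lab)
y_lab = (X_lab[:, 0] + 0.5 * s_lab
         + rng.normal(scale=0.5, size=n_lab) > 0).astype(int)
X_unlab = rng.normal(size=(n_unlab, 2))
s_unlab = rng.integers(0, 2, size=n_unlab)

# Stage 1: pseudo-labeling -- fit on labeled data, label the unlabeled pool.
base = LogisticRegression().fit(X_lab, y_lab)
y_pseudo = base.predict(X_unlab)

X_all = np.vstack([X_lab, X_unlab])
s_all = np.concatenate([s_lab, s_unlab])
y_all = np.concatenate([y_lab, y_pseudo])

# Stage 2: re-sampling -- draw subsets with equal counts in every
# (group, label) cell, so each subset is balanced by construction.
def fair_subsample(X, s, y, rng, size_per_cell=50):
    idx = []
    for g in (0, 1):
        for c in (0, 1):
            cell = np.where((s == g) & (y == c))[0]
            idx.append(rng.choice(cell, size=size_per_cell, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

# Stage 3: ensemble -- one model per fair subset, majority-vote prediction.
models = []
for _ in range(5):
    Xi, yi = fair_subsample(X_all, s_all, y_all, rng)
    models.append(LogisticRegression().fit(Xi, yi))

def ensemble_predict(X):
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

pred = ensemble_predict(X_all)
# Demographic parity gap: difference in positive rates between groups.
gap = abs(pred[s_all == 1].mean() - pred[s_all == 0].mean())
```

Each ensemble member sees a dataset in which both sensitive groups receive positive and negative labels at equal rates, which is one simple way to push the aggregate prediction toward demographic parity while the vote averages out per-subset variance.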

Authors

