Article

Fairness in Deep Learning: A Computational Perspective

Journal

IEEE Intelligent Systems
Volume 36, Issue 4, Pages 25-34

Publisher

IEEE Computer Society
DOI: 10.1109/MIS.2020.3000681

Funding

  1. National Science Foundation [IIS-1657196, IIS-1718840, IIS-1939716]
  2. DARPA [N66001-17-2-4031]

Abstract

Fairness in deep learning has attracted tremendous attention recently, as deep learning is increasingly used in high-stakes decision-making applications that affect individual lives. This article reviews recent progress in tackling the algorithmic fairness problems of deep learning from a computational perspective. Specifically, it shows that interpretability can serve as a useful ingredient for diagnosing the reasons that lead to algorithmic discrimination, and it discusses fairness mitigation approaches categorized according to the three stages of the deep learning life cycle, aiming to push forward the area of fairness in deep learning and build genuinely fair and reliable deep learning systems.
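As a concrete illustration of the kind of computational fairness check the abstract refers to, the sketch below computes demographic parity difference, a standard group-fairness metric (the gap in positive-prediction rates between two demographic groups). This is a minimal, generic example, not the article's own method; the function and variable names are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    0 means parity. `y_pred` holds binary model predictions and `group`
    holds binary group membership (hypothetical names for illustration).
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Example: a classifier that favors group 1 (3/4 positives vs. 1/4)
preds  = [1, 0, 0, 0, 1, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Mitigation approaches at the three life-cycle stages then aim to shrink this kind of gap before training (e.g., reweighing the data), during training (fairness-constrained objectives), or after training (adjusting decision thresholds per group).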
