A group of industry, academic, and government experts convene in Philadelphia to explore the roots of algorithmic bias

Journal

COMMUNICATIONS OF THE ACM
Volume 63, Issue 5, Pages 82-89

Publisher

Association for Computing Machinery
DOI: 10.1145/3376898


Funding

  1. National Science Foundation, Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [grant 1136993]

Abstract

THE LAST DECADE has seen a vast increase both in the diversity of applications to which machine learning is applied and in the import of those applications. Machine learning is no longer just the engine behind ad placements and spam filters; it is now used to filter loan applicants, deploy police officers, and inform bail and parole decisions, among other things. The result has been serious concern about the potential for data-driven methods to introduce and perpetuate discriminatory practices, and to otherwise be unfair. This concern has not been without reason: a steady stream of empirical findings has shown that data-driven methods can unintentionally both encode existing human biases and introduce new ones. [7, 9, 11, 60]
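As an illustrative aside (not drawn from the article itself), one common way such unintended bias is quantified is the demographic parity difference: the gap in positive-outcome rates between groups. The sketch below is a minimal, hypothetical example; the function name, the toy loan-approval data, and the group labels are all assumptions for illustration only.

```python
# Minimal sketch of a demographic parity check on toy predictions.
# All data and names here are hypothetical, for illustration only.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates
    across the groups present in `groups`."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy loan-approval predictions (1 = approve) for two groups, A and B.
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is approved at 3/4, group B at 1/4: a gap of 0.5.
print(demographic_parity_difference(preds, groups))  # 0.5
```

A gap of 0 would indicate that both groups receive positive predictions at the same rate; larger gaps are one (of several) warning signs of the kind of disparate treatment the article discusses.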
