Article

The Thick Machine: Anthropological AI between explanation and explication

Journal

Big Data & Society
Volume 9, Issue 1

Publisher

SAGE Publications Inc
DOI: 10.1177/20539517211069891

Keywords

Thick description; machine learning; Clifford Geertz; computational anthropology; ethnoscience; explainable AI

Abstract

According to Clifford Geertz, the purpose of anthropology is not to explain culture but to explicate it. That should cause us to rethink our relationship with machine learning. It is, we contend, perfectly possible that machine learning algorithms, which are unable to explain, and could even be unexplainable themselves, can still be of critical use in a process of explication. Thus, we report on an experiment with anthropological AI. From a dataset of 175K Facebook comments, we trained a neural network to predict the emoji reaction associated with a comment and asked a group of human players to compete against the machine. We show that a) the machine can reach the same (poor) accuracy as the players (51%), b) it fails in roughly the same ways as the players, and c) easily predictable emoji reactions tend to reflect unambiguous situations where interpretation is easy. We therefore repurpose the failures of the neural network to point us to deeper and more ambiguous situations where interpretation is hard and explication becomes both necessary and interesting. We use this experiment as a point of departure for discussing how experiences from anthropology, and in particular the tension between formalist ethnoscience and interpretive thick description, might contribute to debates about explainable AI.
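
The abstract describes the classifier only at a high level: a neural network mapping Facebook comment text to one of the emoji reactions. Purely as an illustrative sketch, and not the authors' implementation, a minimal model of that general shape could be set up as follows in Python with PyTorch; the reaction label set, the toy comments, the vocabulary handling and all hyperparameters are hypothetical.

import torch
import torch.nn as nn
from collections import Counter

# Illustrative set of Facebook reaction classes (hypothetical label set).
EMOJIS = ["love", "haha", "wow", "sad", "angry"]

def build_vocab(texts, max_size=20000):
    # Map the most frequent tokens to integer ids; 0 is reserved for padding.
    counts = Counter(tok for t in texts for tok in t.lower().split())
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common(max_size))}

def encode(text, vocab, max_len=50):
    # Unknown tokens fall back to the padding id and are ignored by the model.
    ids = [vocab.get(tok, 0) for tok in text.lower().split()][:max_len]
    return ids + [0] * (max_len - len(ids))

class EmojiClassifier(nn.Module):
    """Embedding -> mean pooling -> linear layer over the reaction classes."""
    def __init__(self, vocab_size, embed_dim=64, n_classes=len(EMOJIS)):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size + 1, embed_dim, padding_idx=0)
        self.fc = nn.Linear(embed_dim, n_classes)

    def forward(self, token_ids):
        return self.fc(self.embed(token_ids))

# Toy usage with made-up comments; a real run would train on the 175K-comment corpus.
texts = ["what wonderful news", "this is outrageous", "so sorry for your loss"]
labels = torch.tensor([EMOJIS.index("love"), EMOJIS.index("angry"), EMOJIS.index("sad")])
vocab = build_vocab(texts)
x = torch.tensor([encode(t, vocab) for t in texts])

model = EmojiClassifier(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # tiny training loop, enough to fit the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(x), labels)
    loss.backward()
    optimizer.step()

predictions = model(x).argmax(dim=1)
print([EMOJIS[i] for i in predictions])  # predicted reaction per comment

A mean-pooled embedding classifier is used here only because it is the smallest self-contained example; the point of the experiment lies less in the architecture than in comparing the model's per-comment predictions (and failures) with the guesses of the human players.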

Authors

Anders Kristian Munk, Asger Gehrt Olesen, Mathieu Jacomy
