Article

The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?

Journal

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 3, Issue 4, Pages 424-433

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/T-AFFC.2012.29

Keywords

Affect processing; intelligent artificial agent; affect dilemma; ethics

Abstract

Humans are deeply affective beings who expect other human-like agents to be sensitive to affect and to express affect of their own. Hence, complex artificial agents that are not capable of affective communication will inevitably cause humans harm, which suggests that affective artificial agents should be developed. Yet, affective artificial agents with genuine affect will then themselves have the potential for suffering, which leads to the Affect Dilemma for Artificial Agents and, more generally, for artificial systems. In this paper, we discuss this dilemma in detail and argue that we should nevertheless develop affective artificial agents; in fact, we might be morally obligated to do so if they end up being the lesser evil compared to (complex) artificial agents without affect. Specifically, we propose five independent reasons for the utility of developing affective artificial agents and also discuss some of the challenges that we have to address as part of this endeavor.
