Article

Resolving content moderation dilemmas between free speech and harmful misinformation

Publisher

National Academy of Sciences
DOI: 10.1073/pnas.2210666120

Keywords

moral dilemma; harmful content; online speech; content moderation; conjoint experiment

Abstract

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people's judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents' decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.
