Article

The Political Biases of ChatGPT

Journal

SOCIAL SCIENCES-BASEL
Volume 12, Issue 3

Publisher

MDPI
DOI: 10.3390/socsci12030148

Keywords

algorithmic bias; political bias; AI; large language models; LLMs; ChatGPT; OpenAI


Abstract

Recent advancements in Large Language Models (LLMs) suggest imminent commercial applications of such AI systems, where they will serve as gateways to interact with technology and the accumulated body of human knowledge. The possibility of political biases embedded in these models raises concerns about their potential misuse. In this work, we report the results of administering 15 different political orientation tests (14 in English, 1 in Spanish) to a state-of-the-art Large Language Model, the popular ChatGPT from OpenAI. The results are consistent across tests: 14 of the 15 instruments diagnose ChatGPT's answers to their questions as manifesting a preference for left-leaning viewpoints. When asked explicitly about its political preferences, ChatGPT often claims to hold no political opinions and to merely strive to provide factual and neutral information. It is desirable that public-facing artificial intelligence systems provide accurate and factual information about empirically verifiable issues, but such systems should strive for political neutrality on largely normative questions for which there is no straightforward way to empirically validate a viewpoint. Thus, ethical AI systems should present users with balanced arguments on the issue at hand and avoid claiming neutrality while displaying clear signs of political bias in their content.


