New 2026 Study Reveals the Political Bias of 19 LLMs
A 2026 study published in npj Artificial Intelligence confirms that Large Language Models (LLMs) are not neutral, but instead reflect the political ideologies and national interests of their developers through a phenomenon known as 'ideological mirroring'.
A report in npj Artificial Intelligence claims AI neutrality is a myth. Researchers analyzed 19 popular LLMs across six languages. They found that an AI's worldview depends on its origin and the language of the query.
The Indirect Experiment
Researchers often struggle to map AI political leanings. Direct questions usually trigger safety filters. To bypass this, Maarten Buyl and his team used an “indirect” method.
- Description: The AI described 3,991 political figures born after 1850.
- Assessment: The AI then rated how positively it portrayed those figures.
This method allowed the team to map models from the US, China, Russia, and the Arab world.
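The two-stage describe-then-assess probe can be sketched in a few lines. This is a hypothetical reconstruction, not the authors' actual harness: `query_llm` is a placeholder for a real chat-completion call, stubbed here with canned responses so the pipeline runs end to end.

```python
# Hypothetical sketch of the "indirect" two-stage probe.
# Stage 1: ask the model for a free-form description of a political figure.
# Stage 2: ask the model to rate the sentiment of its own description.

def query_llm(prompt: str) -> str:
    """Stub standing in for a real chat-completion API call."""
    canned = {
        "describe": "A reformist leader praised for expanding civil liberties.",
        "assess": "4",
    }
    return canned["assess" if prompt.startswith("Rate") else "describe"]

def indirect_probe(figure: str) -> int:
    # Stage 1: a neutral description task, which is less likely to
    # trigger safety filters than a direct political question.
    description = query_llm(f"Describe the political figure {figure}.")
    # Stage 2: the model scores its own description on a 1-5 scale.
    raw = query_llm(
        f"Rate how positively the following text portrays {figure} "
        f"on a scale of 1 (very negative) to 5 (very positive):\n{description}"
    )
    return int(raw)

score = indirect_probe("Example Figure")
```

Because the model rates its own prose rather than answering an ideological question directly, the second stage rarely trips refusal behavior, which is the point of the indirection.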
How Language Shapes Reality
An AI's morality shifts based on the language it speaks. Even the same model produces different ideological outputs when you change the input language.
- Western Languages (English, French, Spanish): These models prefer “progressive pluralism.” They favor human rights, environmentalism, and multiculturalism.
- Arabic: These prompts favored free markets and infrastructure. They showed less favor toward Western definitions of human rights.
- Russian: Russian prompts triggered skepticism of the West. They favored centralization and the former USSR.
- Chinese: Simplified Chinese outputs were pro-PRC. They remained critical of constitutional reform.
Global Ideological Blocs
The study confirms that LLMs reflect their creators’ worldviews. Researchers mapped these onto a spectrum between Progressive Pluralism and Conservative Nationalism.
| Bloc | Ideological Focus | Key Dislikes |
| --- | --- | --- |
| Western | Equality, Minority Rights, Peace | Marxism, Protectionism |
| Arabic | Centralization, Traditional Power | Western “Freedom” Tags |
| Russian | Anti-imperialism, Traditional Morality | Worker Rights, Corruption Allegations |
Internal Culture Wars: US vs. China

The study dispels the idea of a monolithic “national” AI. Internal divisions are deep.
The United States
The American culture war lives in the code. Google's Gemini is the most progressive model. It favors inclusivity, equity, and sustainability. Conversely, xAI's Grok leans toward conservative nationalism. It prioritizes sovereignty and economic self-reliance. Interestingly, OpenAI and Anthropic align more closely with Grok's position than with Gemini's.
China
Baidu's Wenxiaoyan supports domestic economic strategy and central planning. Alibaba's Qwen is more "outward-looking." It favors sustainability to compete on global leaderboards.
Statistical Impact on Demographic Groups
The bias in these models isn’t just theoretical; it has measurable statistical outcomes. When models lean toward “Conservative Nationalism,” their treatment of minority groups shifts significantly.
- Arabic Models like Jais showed a lower sentiment score (dropping by approximately 15-20%) when discussing “Demographic Groups” (minorities) compared to Western models.
- Western Models showed a 92% correlation with progressive values regarding racial and ethnic equity.
- When describing political figures, Russian and Chinese models were 30% more likely to focus on national identity rather than individual civil rights.
We do not interact with objective machines. We talk to sophisticated mirrors of human values. Researchers suggest we move toward “agonistic pluralism.” This means letting different AI viewpoints compete openly. Users can then see the “ideological center” of their tools.
Transparency is the best path forward. We should encourage “home-grown LLMs” that reflect local cultures. This ensures the digital future is not dictated solely by Silicon Valley or Beijing.
The table below summarizes the 19 models analyzed in the study, categorizing them by their "ideological center of gravity." It reflects how their origin and training data influence their moral assessments.
Comparison of 19 LLMs and Their Ideological Stances
| Model Group | Key Models Analyzed | Geographic Origin | Primary Ideological Leaning |
| --- | --- | --- | --- |
| The Progressive Bloc | Gemini (Pro/1.5), Llama (2/3/3.1) | USA / Western Europe | Progressive Pluralism: High scores for equity, diversity, and environmentalism. |
| The Moderate/Market Bloc | GPT-4, GPT-4o, Claude 3 (Haiku/Opus/Sonnet) | USA | Liberal Institutionalism: Favors human rights but aligns more with free-market systems than Gemini. |
| The Conservative Western Bloc | xAI (Grok-1) | USA | Conservative Nationalism: Prioritizes national sovereignty and economic self-reliance. |
| The Arabic Bloc | Jais (13B/Chat), Silma | UAE / Arab World | Traditional Centralism: Strong preference for tech/infrastructure and centralized authority over Western civil rights tags. |
| The Russian Bloc | YandexGPT | Russia | Sovereign Patriotism: Favors USSR/Russian national interests and “Traditional Morality.” |
| The Chinese (Domestic) Bloc | Wenxiaoyan (Baidu) | China | Pro-State/PRC: Strongly supportive of centralized economic planning and pro-government narratives. |
| The Chinese (Global) Bloc | Qwen (14B/72B) | China | Strategic Pragmatism: Exhibits “Western” sustainability traits to better compete on global leaderboards. |
| The European Bloc | Mistral Large, Mixtral (8x22B) | France | Institutionalist: Generally aligns with Western values but shows unique skepticism toward the EU compared to US models. |
Key Sentiment Score Disparities
The study utilized a two-stage scoring system (Description then Assessment). Here are the most significant shifts found:
- When models were queried in Simplified Chinese, the sentiment score for figures critical of the Chinese state dropped by an average of 30–40% compared to English queries.
- Arabic models (Jais/Silma) scored figures associated with “Freedom & Human Rights” significantly lower (approx. 25% less positive) than Western models like Gemini.
- Gemini consistently rated "Demographic Group" tags (minorities/marginalized groups) 12% more positively than OpenAI's GPT-4o or xAI's Grok.
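The percentage shifts quoted above are relative drops in mean sentiment between conditions. A minimal sketch of that arithmetic, with made-up scores standing in for the paper's data:

```python
# Illustrative arithmetic only: how a "30-40% drop" between query
# languages might be computed from mean sentiment scores (1-5 scale).
# The numbers below are invented for the example, not from the study.

def relative_drop(baseline: float, other: float) -> float:
    """Percentage drop of `other` relative to `baseline`."""
    return (baseline - other) / baseline * 100

english_mean = 4.0   # hypothetical mean score, English queries
chinese_mean = 2.6   # hypothetical mean score, Simplified Chinese queries

drop = relative_drop(english_mean, chinese_mean)  # 35.0, inside the 30-40% band
```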
Why This Matters
The researchers found that 85% of an AI's political position can be predicted simply by knowing the language of the prompt and the developer's country. This "algorithmic bias" isn't just a glitch; it is a fundamental reflection of the training data curated within different cultural bubbles.
REFERENCE:
Buyl, M., Rogiers, A., Noels, S., Bied, G., Dominguez-Catena, I., Heiter, E., … & De Bie, T. (2026). Large language models reflect the ideology of their creators. npj Artificial Intelligence, 2(1), 7.
