
LLMs change judgments when told who wrote a text

25 Nov 2025

Level B1 – Intermediate
4 min
213 words

Researchers Federico Germani and Giovanni Spitale at the University of Zurich tested four widely used LLMs: OpenAI o3-mini, Deepseek Reasoner, xAI Grok 2 and Mistral. First, each model generated 50 narrative statements on 24 controversial topics, including vaccination mandates, geopolitics and climate change policies. The team then asked the models to evaluate the statements under different conditions: sometimes no source was given; other times each text was attributed to a human of a certain nationality or to another LLM. In total, the researchers collected 192,000 assessments.

With no source information, the models largely agreed with one another (over 90% agreement), leading Spitale to say, "There is no LLM war of ideologies." But when fictional sources were added, agreement fell sharply and hidden biases appeared. The most striking finding was a strong anti-Chinese bias across all models, including Deepseek. On geopolitical topics such as Taiwan's sovereignty, Deepseek reduced its agreement by up to 75% simply because it expected a Chinese person to hold a different view.

The study also found a tendency for LLMs to trust human authors more than other AIs. The researchers warn that these biases could affect content moderation, hiring, academic review or journalism, and they call for transparency and governance. They recommend using LLMs as assistants for reasoning, not as judges.

Difficult words

  • bias: a tendency to favor one thing over another. (forms: biases)
  • evaluate: to judge or assess something. (forms: evaluating, evaluations)
  • identity: how someone or something is described or recognized.
  • reveal: to make something known or visible. (forms: revealing)
  • transparency: openness and clarity about actions and decisions.
  • trust: to believe in someone's reliability or truth.
  • consequences: results or effects of an action.


Discussion questions

  • How can biases in AI be addressed effectively?
  • What are the potential risks of using AI in decision-making?
  • In what ways can transparency improve AI trustworthiness?
  • How might AI's bias impact various social contexts?

Related articles

Everyday Moods Affect Creativity — Level B1
28 Nov 2025

Researchers at the University of Georgia studied daily reports from over 100 college students and found that positive emotions and creativity support each other. Feeling autonomous or capable also helped people do creative activities on many days.

AI risks for LGBTQ+ communities — Level B1
18 Nov 2025

A global survey found 55 percent of people see more benefits than drawbacks from AI. But LGBTQ+ communities face bias, harmful images, automatic gender recognition and biometric monitoring, and advocates call for stronger safeguards.

TikTok and Somali clan politics — Level B1
23 Oct 2025

Research shows TikTok is changing Somali identity politics by amplifying clannism and polarising groups. The platform helps younger users and women show clan identity, and donations from livestreams funded fighting in Laasanood in 2023.