Level A1 – Beginner (CEFR A1)
2 min
66 words
- Researchers work at the University of Zurich.
- They study large language models (LLMs).
- Each model makes fifty narrative statements.
- The statements cover 24 controversial topics.
- When no source is given, models agree more than 90% of the time.
- When an author is named, models change their answers.
- All models show a strong anti-Chinese bias.
- Models trust human authors more than other AIs.
- Researchers say we need transparency and governance.
Difficult words
- bias — To think unfairly about someone or something. (forms: biased, biases)
- evaluate — To judge or assess something.
- fairness — The quality of being just and impartial.
- nationality — The state of being a citizen of a country.
- researcher — A person who studies a topic carefully. (form: researchers)
- author — A person who writes a text. (forms: authors, author's)
Discussion questions
- Why do you think AI can be biased?
- How important is it for AI to be fair?
- What do you think about AI judging texts?
Related articles
Inequality and Pandemics: Why Science Alone Is Not Enough
Matthew M. Kavanagh says science can detect viruses and make vaccines fast, but rising inequality makes pandemics worse. He proposes debt relief, shared technology, regional manufacturing and stronger social support to stop future crises.