Level A1 – Beginner (CEFR A1)
2 min
66 words
- Researchers work at the University of Zurich.
- They study large language models (LLMs).
- Each model makes fifty narrative statements.
- The statements cover 24 controversial topics.
- When no source is given, models agree more than 90% of the time.
- When an author is named, models change their answers.
- All models show a strong bias against Chinese authors.
- Models trust human authors more than AI authors.
- Researchers say we need transparency and governance.
Difficult words
- bias — An unfair preference for or against someone or something. (forms: biased, biases)
- evaluate — To judge or assess something.
- fairness — The quality of being just and impartial.
- nationality — The state of being a citizen of a country.
- researcher — A person who studies a topic carefully. (forms: researchers)
- author — A person who writes a text. (forms: authors, author's)
Tip: hover over, focus, or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- Why do you think AI can be biased?
- How important is it for AI to be fair?
- What do you think about AI judging texts?
Related articles
6 Dec 2025
People with AMD Judge Car Arrival Times Like Others
A virtual reality study compared adults with age-related macular degeneration (AMD) and adults with normal vision. Both groups judged vehicle arrival times similarly, and combining vision with sound did not produce a multimodal benefit.
16 Oct 2025
Researchers Call for Clear Rules on Gene-Edited Crops in Mexico
Mexican researchers want rules that distinguish gene-edited crops from GMOs. They launched a petition asking the government for evidence-based regulation while warning that a March decree banning genetically modified maize could also affect gene editing.