- Researchers work at the University of Zurich.
- They study large language models (LLMs).
- Each model makes fifty narrative statements.
- The statements cover 24 controversial topics.
- When no source is given, models agree with the statements over 90% of the time.
- When an author is named, models change their answers.
- All models show a strong anti-Chinese bias.
- Models trust human authors more than AI authors.
- Researchers say we need transparency and governance.
Difficult words
- bias — An unfair opinion for or against someone or something.
- evaluate — To judge or assess something.
- fairness — The quality of being just and impartial.
- nationality — The state of being a citizen of a country.
- researcher — A person who studies a topic carefully.
- author — A person who writes a text.
Discussion questions
- Why do you think AI can be biased?
- How important is it for AI to be fair?
- What do you think about AI judging texts?
Related articles
AI and citizen photos identify Anopheles stephensi in Madagascar
Scientists used AI and a citizen photo from the GLOBE Observer app to identify Anopheles stephensi in Madagascar. The study shows how apps, a 60x lens and a dashboard can help monitor this urban malaria mosquito, but limited access and awareness restrict wider use.
AI to stop tobacco targeting young people
At a World Conference in Dublin (23–25 June), experts said artificial intelligence can help stop tobacco companies from targeting young people online. They warned that social media and new nicotine products draw young people into addiction, and that poorer countries carry the heaviest burden.
UNESCO report finds gaps in education data
A UNESCO report published on 27 April finds important gaps in education data from poorer countries. It reviewed primary and secondary education data from 120 countries, but low-income nations were under-represented, and no science assessment data was found for them.