How words shape the debate about AI
CEFR B2
16 Apr 2026
Adapted from Daria Dergacheva, Global Voices • CC BY 3.0
Photo by Brett Jordan, Unsplash
The public debate around artificial intelligence shifted after OpenAI released ChatGPT to the public in November 2022, and the topic remained central in 2026. This report is part of the series “Don't ask AI, ask a peer,” a collaboration among Global Voices, the Association for Progressive Communications, and GenderIT.
Generative AI has produced mixed effects: it disrupted education, offered new tools to some coders, and was deployed in conflict. Many AI companies still lack profitable business models and a clear value proposition for businesses, yet their leaders continue to promote anthropomorphising visions of their systems. Several firms are linked to established tech oligarchs: Google is developing Gemini; Microsoft invested in Anthropic and OpenAI; Meta has Llama; Elon Musk bought and gutted Twitter and offers Grok; and Jeff Bezos invests in seven AI companies, including Perplexity AI and the Dutch startup Toloka.
Part of the public confusion comes from technical language and marketing. When OpenAI introduced ChatGPT, it was described as "trained" on a large "corpus" with a "neural network" that generates "natural language." Errors were called "hallucinations," though experts describe them as statistical mistakes; some researchers estimate that models are wrong in 25–30 percent of cases.
Anthropic published "Claude’s Constitution," which treats the model’s behaviour in human terms. Legal scholar Luiza Jarovsky warned that this risks giving AI undue moral or legal status, while Anthropic philosopher Dr. Amanda Askell said she was "building Claude’s personality." Researchers also note that LLMs are designed to write in the first person and that synthetic voices aim to sound human. Caleb Sponheim observes unnecessary pleasantries, sycophantic agreement, and language that values engagement over utility. Linguist Emily Bender and researcher Nanna Inie state: “AI is not your friend. Nor is it an intelligent tutor, an empathetic ear, or a helpful assistant. It cannot ‘make up’ facts, and it does not make ‘mistakes.’ It does not actually answer your questions.”
Many experts argue the language should change. They say generative AI repeats human-produced patterns through probabilistic automation, so journalists and policymakers should avoid company marketing language and focus instead on safety, rights, and the value of human creativity and connection.
Difficult words
- generative — creating new texts, images, or data automatically
- anthropomorphise — to give human traits to non-human things
- oligarch — a very powerful business or tech leader
- corpus — a large collection of written or spoken texts
- neural network — a computer model, loosely inspired by the brain, that learns patterns from data
- hallucination — an output that is factually incorrect
- probabilistic — based on likelihoods or statistical chance
- sycophantic — excessively flattering to gain approval
Discussion questions
- How could describing AI systems in human terms (for example, giving them a "personality") change the way people trust or use those systems? Give reasons.
- The report says many AI companies lack profitable business models. What effects might that have on the development or deployment of AI technologies?
- Do you agree that journalists should avoid company marketing when reporting on AI? Give examples or reasons based on the article or your own experience.