How words shape the debate about AI
16 Apr 2026
Adapted from Daria Dergacheva, Global Voices • CC BY 3.0
Photo by Brett Jordan, Unsplash
The arrival of large language models after OpenAI released ChatGPT in November 2022 changed public debate about artificial intelligence. The discussion remained central in 2026 and covered both benefits and harms.
Generative AI disrupted education, provided new tools for some coders, and was used in war. Many AI companies still lack profitable business models and struggle to articulate a clear value proposition for businesses. At the same time, company leaders often promote anthropomorphised visions of their models.
Part of the confusion comes from how the technology is described. OpenAI said ChatGPT was "trained" on a large "corpus" using a "neural network" to generate "natural language." Errors are often called "hallucinations," but researchers call them statistical mistakes and say models can be wrong a substantial portion of the time.
Experts argue that journalists and policymakers should avoid marketing language and focus on safety, rights, and the value of human creativity and connection.
Difficult words
- generative — Able to produce new content or output.
- disrupt — To change a system quickly and disturb it.
- profitable — Making more money than it costs.
- anthropomorphise — To give human traits to non-human things.
- corpus — A large collection of text used for study.
- neural network — A computer model that learns from data.
- hallucination — An answer a model gives that is not true.
- statistical — Related to data, numbers, and patterns.
Discussion questions
- How has AI started to change education or work where you live? Give one example and one concern.
- What problems can come from companies describing AI in human terms?
- How should journalists explain AI to help the public understand risks and benefits?
Related articles
Ecuador teams build tech to fight election disinformation
A revived Hacks Hackers chapter in Ecuador held a February conference and a hackathon to tackle electoral disinformation. Three winning teams — Goddard, VeritasAI and PillMind — received prizes, mentoring and support to develop prototypes.
Small pause to slow misinformation on social media
Researchers at the University of Copenhagen propose a small pause before sharing on platforms like X, Bluesky and Mastodon. A computer model shows that a short delay plus a brief learning step can reduce reshares and improve shared content quality.