Social media: help and harm
CEFR B2
10 Nov 2025
Adapted from Safa, Global Voices • CC BY 3.0
Photo by Mariia Shalabaieva, Unsplash
Social media plays a growing role in how people get news and form communities. For many from marginalized groups, platforms offer access to support and connection. Yet the same technologies can amplify hate speech, lies and actions that cause real-world harm. Experts note that design choices often increase both benefits and risks at once.
In January 2025 Mark Zuckerberg said Meta would end its third-party fact-checking program and shift to a “community notes” model like X, and that it would end some policies that protect LGBTQ+ users. The International Fact-Checking Network called the end of Meta’s nine-year fact-checking program "a step backward," and the UN High Commissioner for Human Rights, Volker Türk, warned that allowing hate speech and harmful content online has real-world consequences.
Researchers point to platform mechanics that reward clicks and engagement. One study found that the most habitual 15% of Facebook users were responsible for 37% of the false headlines shared in the study, and a leaked 2019 Facebook report said product features such as virality, recommendations and engagement optimization help misinformation and hate speech flourish. Algorithms decide what people see and are hard to change; corrections rarely get the same reach as the original false claims, so the initial harm often remains.
Generative AI and automated systems add further risks. Indonesia’s 2024 elections showed AI-generated digital avatars in wide use; presidential candidate Prabowo Subianto used one widely on social media. A 2023 Freedom House report warned that automated systems enable more precise censorship and that "purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern." Examples from Venezuela, and small pages spreading AI-generated content such as "Shrimp Jesus" images, show how online content can erode trust and feed data brokers, with real consequences for scams and influence operations. Overall, the balance between protection and harm depends on design choices, power structures and who controls the tools.
Difficult words
- marginalized — People or groups pushed to social or political edges.
- amplify — Make something stronger, larger, or more noticeable.
- fact-checking — Checking whether news and claims are true.
- virality — The quality of spreading quickly to many people.
- algorithm — A set of rules that controls computer choices.
- disinformation — False or misleading information shared on purpose.
- engagement — User actions such as likes, comments and shares.
Discussion questions
- How could ending third-party fact-checking affect people who rely on platforms for support and connection?
- What design changes could social platforms make to reduce harm while keeping benefits for communities?
- How might AI-generated images and audio change how people trust information online?