Social media: help and harm
CEFR B2
10 Nov 2025
Adapted from Safa, Global Voices • CC BY 3.0
Photo by Mariia Shalabaieva, Unsplash
Social media plays a growing role in how people get news and form communities. For many from marginalized groups, platforms offer access to support and connection. Yet the same technologies can amplify hate speech, lies and actions that cause real-world harm. Experts note that design choices often increase both benefits and risks at once.
In January 2025 Mark Zuckerberg said Meta would end its third-party fact-checking program, shift to a “community notes” model like X’s, and roll back some policies that protect LGBTQ+ users. The International Fact-Checking Network called the end of Meta’s nine-year fact-checking program "a step backward," and the UN High Commissioner for Human Rights, Volker Türk, warned that allowing hate speech and harmful content online has real-world consequences.
Researchers point to platform mechanics that reward clicks and engagement. One study found that the most habitual 15% of Facebook users were responsible for 37% of the false headlines shared in the study, and a leaked 2019 Facebook report said product features such as virality, recommendations and engagement optimization help misinformation and hate speech to flourish. Algorithms decide what people see and are hard to change; corrections rarely get the same reach as the original false claims, so the initial harm often remains.
Generative AI and automated systems add further risks. Indonesia’s 2024 elections showed AI-generated digital avatars in wide use: candidate Prabowo Subianto used one widely on social media. A 2023 Freedom House report warned that automated systems enable more precise censorship and that "purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern." Examples from Venezuela and small pages such as "Shrimp Jesus" show how online content can still erode trust and feed data brokers, with real consequences for scams and influence operations. Overall, the balance between protection and harm depends on design choices, power structures and who controls the tools.
Difficult words
- marginalized — People or groups pushed to social or political edges.
- amplify — Make something stronger, larger, or more noticeable.
- fact-checking — Checking whether news and claims are true.
- virality — The quality of spreading quickly to many people.
- algorithm — A set of rules that controls computer choices.
- disinformation — False or misleading information shared on purpose.
- engagement — User actions such as likes, comments and shares.
Discussion questions
- How could ending third-party fact-checking affect people who rely on platforms for support and connection?
- What design changes could social platforms make to reduce harm while keeping benefits for communities?
- How might AI-generated images and audio change how people trust information online?
Related articles
Table tennis players plant trees to help Safashahr
In Safashahr a local table tennis association links sport and nature. Players, families and residents planted drought‑resistant trees and ran volunteer campaigns to ease a worsening water crisis. The group plans further action with local authorities.
Latin American groups build AI to study gender violence
Groups in Latin America create open, local AI tools to study gender inequalities and violence. Projects like AymurAI search court documents, protect sensitive data on local servers and help governments and civil society with evidence.
Khaled Khella shows hidden struggles
Khaled Khella is an independent Egyptian filmmaker whose short films examine desire, power and everyday survival. He gained international attention with festival-screened shorts and the film "Egyptian Misery" and continues to address urgent social issues.
Most young users still smoke nicotine, tobacco or cannabis
A 2022–23 study of people aged 12–34 found most young Americans who use nicotine, tobacco or cannabis still smoke one or more combustible products. The research groups users by their usual product patterns and urges targeted prevention.
LLMs change judgments when told who wrote a text
Researchers at the University of Zurich found that large language models change their evaluations of identical texts when given an author identity. The study tested four models and warns about hidden biases and the need for governance.