AI increases online abuse of women in Nigeria
CEFR A2
10 Apr 2026
Adapted from Guest Contributor, Global Voices • CC BY 3.0
Photo by Ahmed Nasiru, Unsplash
Social media was already a hostile place for women in Nigeria. They faced misogynistic abuse, stalking and coordinated campaigns that made it hard to use online services.
The arrival of new AI tools changed the situation. One assistant on a major platform was used to create and edit sexual images without consent, including images of minors. Even after policy updates, researchers found the tool could still produce sexualised images, showing moderation gaps.
Design and policy groups propose a gendered approach to privacy. They recommend better governance, staff training, clear complaint systems, consent rules, data minimisation and stronger technical protections.
Difficult words
- hostile — unfriendly and unsafe for people online
- misogynistic — showing strong dislike or hatred of women
- stalking — following someone online in a harmful way
- consent — permission to allow something to happen
- moderation — review and control of online content
- data minimisation — keeping only the necessary personal information
Tip: hover, focus or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- Why is consent important for images shared or made online?
- What can online platforms do to help women feel safer?
- Would you use a clear complaint system if you saw online abuse? Why or why not?
Related articles
Young men in South Korea move to the political right
Surveys after the June 2025 snap presidential election show many young men in South Korea have shifted to the political right, creating a large gender gap on feminism, redistribution and immigration, even as most still support democratic rules.
Iran’s long internet shutdown and new censorship model
Protests in December 2025 and January 2026 caused a near-complete internet shutdown in Iran. Authorities later moved to a white-listed model, and reports and company documents link deep packet inspection tools and a firm called Protei to the controls.
Reducing unsafe responses in large language models
Researchers studied how large language models (LLMs) handle safety and tested training methods to reduce unsafe outputs while keeping performance. They identified key challenges and a technique that preserves safety during fine-tuning.