AI sexual images without consent in Brazil
CEFR B1
28 Apr 2026
Adapted from Fernanda Canofre, Global Voices • CC BY 3.0
Photo by KOBU Agency, Unsplash
Recent cases in Brazil show how AI can be used to create sexual images without consent. In November 2023, parents in Rio de Janeiro reported that teenagers were making and sharing AI-generated nudes of classmates. In September 2024, a group of teens in Bahia were suspected of using AI to make pornographic images of classmates, and in Mato Grosso students were expelled after sharing AI images of a teacher and students in online pornography communities.
The independent research centre Internetlab published a technical note in early April 2026. The note examines online violence against women and girls and calls for regulation of AI and digital platforms. Internetlab highlights that biased training databases can make AI reproduce or amplify gender violence.
Internetlab cites research showing a sharp rise in sexual deepfakes. A study by Security Hero found that sexually explicit deepfakes make up 98 percent of deepfake videos online and that 99 percent of targets are women. The study also reported a 464 percent increase in sexual deepfakes between 2022 and 2023. Internetlab recommends safety-by-design rules, prohibition of non-consensual sexual deepfakes as an "excessive risk", curriculum guidelines and clearer platform accountability.
Difficult words
- consent — permission to do something, especially sexual activity
- pornographic — relating to sexual content intended to arouse
- deepfake — a synthetic video or image that impersonates a person
- biased — showing unfair preference or prejudice against groups
- regulation — official rules or laws that control actions
- accountability — responsibility to explain actions and accept consequences
- amplify — make something stronger or larger in effect
Discussion questions
- Should schools punish students who create or share non-consensual AI sexual images? Why or why not?
- What steps could online platforms take to reduce the spread of sexual deepfakes?
- How can biased training data increase harm against women and girls, and what could change this?
Related articles
Reducing unsafe responses in large language models
Researchers studied how large language models (LLMs) handle safety and tested training methods to reduce unsafe outputs while keeping performance. They identified key challenges and a technique that preserves safety during fine-tuning.