AI sexual images without consent in Brazil
CEFR B2
28 Apr 2026
Adapted from Fernanda Canofre, Global Voices • CC BY 3.0
Photo by KOBU Agency, Unsplash
Several recent incidents in Brazil illustrate how AI tools can be used to produce sexual images without consent. In November 2023 parents in Rio de Janeiro reported teenagers making and sharing AI-generated nudes of classmates. In September 2024 a group of teens in Bahia were suspected of using AI to create pornographic images of classmates, and in Mato Grosso students were expelled after sharing AI images of a teacher and students in online pornography communities.
The independent research centre Internetlab published a technical note in early April 2026 examining online violence against women and girls. Internetlab warns that training databases can be biased and that AI may reproduce or amplify gender violence. It cites a Security Hero study which found that sexually explicit deepfakes make up 98 percent of deepfake videos online, that 99 percent of targets are women, and that sexual deepfakes rose by 464 percent between 2022 and 2023.
Internetlab recommends several measures to reduce harm:
- apply safety-by-design rules so platforms build protections from the start;
- classify non-consensual sexual deepfakes as an "excessive risk" and prohibit their use;
- develop curriculum guidelines for digital literacy; and
- clarify platform accountability and legal obligations.
Public debate is already active: in 2025 the Federal Supreme Court (STF) partly struck down an article of the Marco Civil da Internet on platform responsibility, prompting calls for new regulation. Internetlab also notes a rise in bills to criminalise misogynistic behaviour and movements linked to the so-called manosphere, while warning that criminalisation alone may not be enough. As Brazil approaches an election season, Internetlab and its director Clarice Tavares warn that AI tools and chatbots such as Gemini, ChatGPT and Claude could reproduce gender bias and political violence against women. It is not yet clear how quickly new laws or platform rules will stop these harms.
Difficult words
- deepfake — digitally created or altered image or video
- bias — unfair tendency or preference affecting judgment
- amplify — make a problem or effect stronger or larger
- database — organized collection of electronic information or records
- misogynistic — showing hatred, dislike, or prejudice against women
- accountability — responsibility for actions, decisions, or consequences
Discussion questions
- What effects might non-consensual sexual deepfakes have on victims and their communities? Give reasons.
- Which of Internetlab’s recommended measures (safety-by-design, banning non-consensual deepfakes, digital literacy, clearer accountability) seems most effective to you? Why?
- How should online platforms balance free expression with preventing non-consensual sexual images and political violence online?