AI sexual images without consent in Brazil
CEFR A2
28 Apr 2026
Adapted from Fernanda Canofre, Global Voices • CC BY 3.0
Photo by KOBU Agency, Unsplash
In November 2023, parents in Rio de Janeiro reported teenagers making and sharing AI-generated nudes of classmates. In September 2024, a group of teens in Bahia was suspected of using AI to create pornographic images of classmates. In Mato Grosso, students were expelled after sharing AI images of a teacher and students in online pornography communities.
These incidents appear in a technical note published by the independent research centre Internetlab in early April 2026. The note looks at online violence against women and girls and calls for regulation of AI and digital platforms. Internetlab warns that the databases used to train AI can be biased and may reproduce gender violence.
The research also shows a sharp rise in sexual deepfakes and that most targets are women. Internetlab asks for safety-by-design rules, clearer platform obligations and digital literacy education.
Difficult words
- generate — to make something, often by a machine (in the article: AI-generated)
- pornographic — about sexual images or material
- deepfake — a fake image or video made by AI (in the article: deepfakes)
- biased — not fair; shows an unfair opinion or result
- regulation — a rule or law to control something
- digital literacy — skills to use the internet and devices
- expel — to make someone leave school or a group (in the article: expelled)
Tip: hover over, focus or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- Why is digital literacy education important to reduce online harm?
- What rules or changes would make AI and platforms safer for people?
- How should a school respond if students share AI-generated images of classmates?
Related articles
Study finds flaws in cloud password managers
Researchers at ETH Zurich tested three cloud-based password managers and found multiple attacks that could expose or change users' passwords. They followed responsible disclosure, gave companies time to fix the issues, and recommended stronger encryption and audits.
LLMs change judgments when told who wrote a text
Researchers at the University of Zurich found that large language models change their evaluations of identical texts when given an author identity. The study tested four models and warns about hidden biases and the need for governance.