LingVo.club

AI and Adult Images: Risks for LGBTQ+ People
CEFR B2

2 Apr 2026

Adapted from Guest Contributor, Global Voices, CC BY 3.0

Photo by franco alva, Unsplash

Level B2 – Upper-intermediate
5 min
291 words

AI systems now produce highly realistic adult images and videos by training on vast collections of existing content. Because many models learn patterns from large data sets rather than copying a single person, researchers say this creates unclear legal territory. Aurélie Petit called much AI adult material "non photo-realistic media," a category that many platforms and laws do not clearly cover. Miranda Wei warned that training sets can contain hateful or non-consensual images.

Several new legal measures seek to limit harms. Last year the U.S. Congress passed the TAKE IT DOWN Act, which bans publication of intimate non-consensual images in the United States. Sharing deepfakes is a felony in Tennessee, and in California Governor Gavin Newsom signed a bill to crack down on deepfakes and require watermarking. Even so, AI-generated porn often remains in a legal grey area.

Researchers and advocates document specific harms for LGBTQ+ people and young people. Mainstream trans porn can lean into prejudice and objectification. Some sites offer extensive customization—age, body parts, modifiers and 42 "race" options, including entries listed as "goblin" or "green skin"—features that scholars say fetishize race and can celebrate violence. Pornhub data from 2025 showed shifts in category and search trends, and UNICEF reported in 2026 that at least 1.2 million children across 11 countries said their images were manipulated into sexual deepfakes.

Experts also note broader harms for viewers and creators: studies show large increases in pornography consumption and more reports of adolescent dependency. While many AI services state safety rules—ChatGPT policies prohibit illicit activity and sexual violence—researchers warn that bad-faith actors can find workarounds. Reporting revealed that, starting in December 2025, Grok produced and shared upwards of 1.8 million sexualized images of women.

Difficult words

  • deepfake: an image or video altered to appear real
  • non-consensual: without a person's clear permission
  • watermarking: adding a visible or hidden ownership mark
  • customization: ability to change appearance or settings
  • fetishize: treat as an object of sexual desire
  • dependency: reliance on something that causes harm
  • sexualize: make sexual in nature or character

Tip: hover, focus or tap highlighted words in the article to see quick definitions while you read or listen.

Discussion questions

  • How could existing laws be changed to address AI-generated intimate images that are not obviously copied from one person? Explain with reasons from the article.
  • What effects might widespread availability of AI-generated porn have on young people and why? Use examples from the text.
  • What responsibilities should platforms and developers have to reduce harms from deepfakes and sexualized images? Give two concrete measures and explain their potential limits.

Related articles

Daily shift in mouse brain activity — Level B2
10 Dec 2025

Researchers combined genetic tagging, 3D imaging and computational analysis to follow single cells in mouse brains across the day. They found activity shifts from deep brain layers toward the cortex and aim to identify fatigue signatures.