AI risks for LGBTQ+ communities
CEFR B2
18 Nov 2025
Adapted from Aaron Spitler, Global Voices • CC BY 3.0
Photo by Igor Omilaev, Unsplash
Artificial intelligence is spreading into daily life, and private investment in the field has soared over the past decade. A global Ipsos survey found that 55 percent of respondents believed AI-powered solutions offer more benefits than drawbacks. Companies often promote these tools for their efficiency and ease of use, yet many people remain worried about their risks.
Bias affects LGBTQ+ communities in several ways. Wired reported that image-generation tools such as Midjourney sometimes produced reductive and harmful images when asked to depict LGBTQ+ people. Internet data can contain stereotypes, and models trained on that data tend to reproduce them. UNESCO analysed common assumptions behind several large language models and concluded that widely used tools, including Meta's Llama 2 and OpenAI's GPT-2, were shaped by heteronormative attitudes and generated negative content about gay people more than half of the time in their simulations. Improved data labeling may help, but it may not remove all derogatory content from online sources.
Risks go beyond text and images. Forbidden Colours, a Belgian non-profit, described how "automatic gender recognition" (AGR) systems analyse audio-visual material and use facial features or vocal patterns to infer gender. The group argues these measures cannot reveal how a person understands their own gender and are potentially dangerous. Politico Europe reported that Viktor Orbán approved AI-enabled biometric monitoring at local Pride events, presented as a way to protect children from the "LGBTQ+ agenda." In practice, the measure allows the government and law enforcement to surveil artists, activists and ordinary citizens. European Union institutions are reviewing the policy.
Advocates call for partnerships between developers and LGBTQ+ stakeholders, stronger safeguards against surveillance misuse, and a ban on systems that detect or classify gender. They say input from LGBTQ+ people should be sought at all stages of tool development to reduce harms and increase the chances that AI is useful and fair for more people.
Difficult words
- soar — Increase quickly to a much higher level
- reductive — Too simple and missing important detail
- stereotype — Simplified and fixed idea about a group
- heteronormative — Assuming heterosexual relationships are the normal standard
- derogatory — Showing disrespect or insulting language about others
- surveil — Watch people or places, especially by authorities
Discussion questions
- How could partnerships between developers and LGBTQ+ stakeholders reduce harms from AI tools? Give examples.
- What are the possible dangers of using biometric monitoring at public events like Pride? Consider effects on activists and ordinary citizens.
- Do you think banning systems that detect or classify gender is practical and effective? Explain your reasons or suggest alternatives.