AI risks for LGBTQ+ communities
CEFR B2
18 Nov 2025
Adapted from Aaron Spitler, Global Voices • CC BY 3.0
Photo by Igor Omilaev, Unsplash
Artificial intelligence is spreading through daily life, and private investment in the field has soared over the past decade. In a global Ipsos survey, 55 percent of respondents said AI-powered solutions offer more benefits than drawbacks. Companies often promote these tools for their efficiency and ease of use, yet many people remain worried about the risks.
Bias affects LGBTQ+ communities in several ways. Wired reported that image-generation tools such as Midjourney sometimes produced reductive and harmful images when asked to depict LGBTQ+ people. Internet data can contain stereotypes, and models trained on that data tend to reproduce them. UNESCO analysed common assumptions behind several large language models and concluded that widely used tools, including Meta's Llama 2 and OpenAI's GPT-2, were shaped by heteronormative attitudes and generated negative content about gay people more than half the time in its simulations. Improved data labelling may help, but it may not remove all derogatory content from online sources.
Risks go beyond text and images. Forbidden Colours, a Belgian non-profit, described how "automatic gender recognition" (AGR) systems analyse audio-visual material, using facial features or vocal patterns to infer a person's gender. The group argues these measures cannot reveal how a person understands their own gender and are potentially dangerous. Politico Europe reported that Hungarian Prime Minister Viktor Orbán authorised AI-enabled biometric monitoring at local Pride events, presented as a way to protect children from the "LGBTQ+ agenda." In practice, the measure allows the government and law enforcement to surveil artists, activists and ordinary citizens. European Union institutions are reviewing the policy.
Advocates call for partnerships between developers and LGBTQ+ stakeholders, stronger safeguards against surveillance misuse, and a ban on systems that detect or classify gender. They say input from LGBTQ+ people should be sought at all stages of tool development to reduce harms and increase the chances that AI is useful and fair for more people.
Difficult words
- soar (soared) — Increase quickly to a much higher level
- reductive — Too simple and missing important detail
- stereotype (stereotypes) — Simplified and fixed idea about a group
- heteronormative — Assuming heterosexual relationships are the normal standard
- derogatory — Showing disrespect or insulting language about others
- surveil — Watch people or places, especially by authorities
Discussion questions
- How could partnerships between developers and LGBTQ+ stakeholders reduce harms from AI tools? Give examples.
- What are the possible dangers of using biometric monitoring at public events like Pride? Consider effects on activists and ordinary citizens.
- Do you think banning systems that detect or classify gender is practical and effective? Explain your reasons or suggest alternatives.