
Latin American groups build AI to study gender violence
CEFR B2

18 Nov 2025

Adapted from Martín De Ambrosio, SciDev (CC BY 2.0)

Photo by L'Odyssée Belle, Unsplash

Level B2 – Upper-intermediate
6 min
317 words

Across Latin America, activists and researchers are building open, locally hosted AI systems to document and reduce gender inequalities and violence. DataGénero, founded by Ivana Feldfeber, developed AymurAI, an open-source programme that searches court case documents for information relevant to gender-based violence. Installed on local servers to protect security and confidentiality, AymurAI collects material without interpreting it and transmits exactly what it finds to a database. The tool predates the rise of ChatGPT and has been used in courts in Argentina, Chile and Costa Rica since its introduction in 2021; it now includes data from more than 10,000 court rulings.
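
For illustration only: the article does not show AymurAI's code, but a rule-based extractor of the kind it describes, one that records matches verbatim and sends them to a local database, might look like the Python sketch below. Every pattern, field and file name here is invented, not taken from the real tool.

    import re
    import sqlite3

    # Hypothetical patterns; AymurAI's real rule set is not shown in the article.
    PATTERNS = {
        "case_number": re.compile(r"Case\s+No\.\s*(\S+)"),
        "ruling_date": re.compile(r"Date of ruling:\s*([\d/]+)"),
    }

    def extract_fields(document_text):
        """Collect matching text verbatim, without interpreting it."""
        record = {}
        for field, pattern in PATTERNS.items():
            match = pattern.search(document_text)
            if match:
                record[field] = match.group(1)  # stored exactly as found
        return record

    def store(record, db_path="rulings.db"):
        """Send the extracted fields to a local database, echoing how the
        article says the tool transmits exactly what it finds."""
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS rulings (case_number TEXT, ruling_date TEXT)"
        )
        conn.execute(
            "INSERT INTO rulings VALUES (?, ?)",
            (record.get("case_number"), record.get("ruling_date")),
        )
        conn.commit()
        conn.close()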

AymurAI received funding from Canada’s International Development Research Centre (IDRC) and the Patrick McGovern Foundation. The team plans an audio-to-text function that, once validated, could preserve testimonies so victims do not have to recount traumatic events repeatedly. The tool also anonymises sensitive details such as addresses to protect victims.
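
As a minimal sketch of how anonymisation of addresses can work, assuming a simple regular-expression redaction approach (this is not AymurAI's published method, and the address pattern is invented):

    import re

    # Invented address pattern for illustration; a real system would need
    # locale-specific rules and human review before records are released.
    ADDRESS_PATTERN = re.compile(
        r"\b(Calle|Avenida|Av\.)\s+[\wáéíóúñ]+\s+\d+", re.IGNORECASE
    )

    def anonymise(text):
        """Replace address-like strings with a placeholder."""
        return ADDRESS_PATTERN.sub("[ADDRESS REMOVED]", text)

    print(anonymise("La víctima vive en Calle Corrientes 1234."))
    # -> La víctima vive en [ADDRESS REMOVED].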

Other initiatives point to broader challenges. Derechos Digitales, led by Jamila Venturini, argues that many AI systems are created far from the region and reflect worldviews that do not match local demands around gender, race, age and ability; she says privacy, justice and equity must be built into AI by design. In Mexico, Cristina Martínez Pinto’s PIT Policy Lab worked with the state of Guanajuato to predict school dropouts and found that 4,000 young people had been misidentified as not at risk. The team introduced open-source tools to detect bias and provided training for officials on human rights and gender in AI, calling for local public–private partnerships.
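
The Guanajuato result is, in effect, a false-negative problem: students who were at risk but whom the model marked as safe. A generic check of the kind open-source bias tools automate (a sketch, not the PIT Policy Lab's actual tooling) compares miss rates across groups:

    from collections import defaultdict

    def false_negative_rate_by_group(records):
        """records: (group, actually_at_risk, predicted_at_risk) tuples.
        Returns the share of truly at-risk students missed in each group."""
        missed = defaultdict(int)
        at_risk = defaultdict(int)
        for group, actual, predicted in records:
            if actual:
                at_risk[group] += 1
                if not predicted:
                    missed[group] += 1
        return {g: missed[g] / at_risk[g] for g in at_risk}

    sample = [
        ("rural", True, False), ("rural", True, True),
        ("urban", True, True), ("urban", True, True),
    ]
    print(false_negative_rate_by_group(sample))
    # {'rural': 0.5, 'urban': 0.0} -> a large gap signals biased predictions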

Computer scientist Daniel Yankelevich of Fundar emphasises that behaviour varies by culture and that predictive systems must be trained on local information to avoid exported biases. Across projects, common next steps include improving training data, adding technical functions like audio transcription, strengthening protection frameworks and promoting public policies to reduce harms from biased or opaque algorithms.

Difficult words

  • open-source – software with publicly available source code
  • anonymise – remove personal details so identities are not known (in the text: anonymises)
  • confidentiality – keeping information secret and limited to authorised people
  • bias – unfair preference or systematic error in results (in the text: biased, biases)
  • predictive – used to forecast future events or behaviour
  • validate – check that a method or tool is correct (in the text: validated)
  • transmit – send data from one place to another (in the text: transmits)

Discussion questions

  • What are the benefits and possible risks of installing AI systems on local servers for sensitive cases? Give reasons from the article.
  • How effective is anonymisation likely to be in protecting victims, and what additional steps might be needed?
  • How can using local training data change the results of predictive systems in areas like education or justice?

Related articles

AI tools in Indian courts — Level B2
5 Dec 2025

Indian courts are using AI for transcripts, research and translation while facing a backlog of tens of millions of cases. The government and the Supreme Court plan modernisation, but judges and experts warn of risks.
