Across Latin America, activists and researchers are building open, locally hosted AI systems to document and reduce gender inequalities and violence. DataGénero, founded by Ivana Feldfeber, developed AymurAI, an open-source program that searches court case documents for information relevant to gender-based violence. Installed on local servers to protect security and confidentiality, AymurAI collects material without interpreting it and transmits exactly what it finds to a database. The tool predates the rise of ChatGPT and has been used in courts in Argentina, Chile and Costa Rica since its introduction in 2021; it now includes data from more than 10,000 court rulings.
AymurAI received funding from Canada’s International Development Research Centre (IDRC) and the Patrick McGovern Foundation. The team plans an audio-to-text function which, once validated, could preserve testimonies so victims do not have to repeat traumatic events. The tool also anonymises sensitive details such as addresses to protect victims.
Other initiatives point to broader challenges. Derechos Digitales, led by Jamila Venturini, argues that many AI systems are created far from the region and reflect worldviews that do not match local demands on gender, race, age and ability; Venturini says privacy, justice and equity must be built into AI by design. In Mexico, Cristina Martínez Pinto’s PIT Policy Lab worked with the state of Guanajuato to predict school dropouts and found that 4,000 young people had been misidentified as not at risk. The team introduced open-source tools to detect bias and provided training for officials on human rights and gender in AI, calling for local public–private partnerships.
Computer scientist Daniel Yankelevich of Fundar emphasises that behaviour varies by culture and that predictive systems must be trained on local information to avoid exported biases. Across projects, common next steps include improving training data, adding technical functions like audio transcription, strengthening protection frameworks and promoting public policies to reduce harms from biased or opaque algorithms.
Difficult words
- open-source — software with publicly available source code
- anonymise — remove personal details so identities are not known
- confidentiality — keeping information secret and limited to authorised people
- bias — unfair preference or systematic error in results
- predictive — used to forecast future events or behaviour
- validate — check that a method or tool is correct
- transmit — send data from one place to another
Discussion questions
- What are the benefits and possible risks of installing AI systems on local servers for sensitive cases? Give reasons from the article.
- How effective is anonymisation likely to be in protecting victims, and what additional steps might be needed?
- How can using local training data change the results of predictive systems in areas like education or justice?