AI and racial bias at US borders
CEFR B2
25 Apr 2026
Adapted from UntoldMag, Global Voices • CC BY 3.0
Photo by Mathias Reding, Unsplash
Artificial intelligence is increasingly embedded in US border enforcement and interior immigration systems, and rights groups argue that these tools can reproduce and deepen racial discrimination at multiple stages. In 2023, the Black Alliance for Just Immigration, together with the Immigrant Rights Clinic and the International Justice Clinic at UC Irvine, submitted a report to the UN Special Rapporteur on racism. The report contends that AI-driven policies breach the United States’ obligations under the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), which the US ratified in 1994.
The report documents a range of technologies. Autonomous surveillance towers, such as those built by Anduril, and small unmanned aircraft systems track people before they reach a land border; the groups say that expanded “smart border” deployment has coincided with historically high migrant deaths and with the framing of displaced people as security threats. The CBP One app previously required selfies but failed to recognise people with darker skin tones far more often than white applicants, and it lacked important language translations. The Automated Targeting System drew on databases to predict visa overstays and disproportionately flagged Nigerians when travel restrictions were expanded in 2020. Inside the United States, ICE uses predictive tools such as a “Hurricane Score” supplied by B.I. Incorporated and the RAVEn platform, which draws on data from offices across 56 countries. USCIS employs Asylum Text Analytics and an Evidence Classifier to screen claims and documents; these tools can disadvantage non-English speakers and applicants with atypical records.
The report urges a decolonial approach to AI, invoking the Cosmo-uBuntu framework and demanding African and diaspora participation in design and operation. Its recommendations include prompt notification and opt-out options, federal bans on racially discriminatory uses of AI, independent oversight, public disclosure, stakeholder consultation, remedies for harms, and city pledges not to share data for DHS AI development. The authors argue that until systems are demonstrably free of discrimination and include diverse perspectives, AI should not be used at any border.
Difficult words
- discrimination — unfair treatment of people because of their race (in the article: racial discrimination)
- embed — to include firmly inside a system or tool (in the article: embedded)
- autonomous — able to operate without human control
- unmanned aircraft system — a flying vehicle without a pilot on board (in the article: unmanned aircraft systems)
- deployment — the act of placing systems into operation
- oversight — supervision that checks actions and enforces rules
Tip: hover, focus or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- Which recommendation from the report do you think would most reduce racial bias in AI systems, and why?
- How might AI tools that fail to recognise darker skin tones affect migrants' access to services and protection?
- What practical challenges could arise when including African and diaspora participation in AI design and operation?