LingVo.club
[Image: a group of people in a large room with a large screen]

AI and racial bias at US borders (CEFR B2)

25 Apr 2026

Level B2 – Upper-intermediate
6 min
314 words

Artificial intelligence is increasingly embedded in US border enforcement and interior immigration systems, and rights groups argue that these tools can reproduce and deepen racial discrimination at multiple stages. A 2023 report by the Black Alliance for Just Immigration and the Immigrant Rights Clinic and International Justice Clinic at UC Irvine was submitted to the UN Special Rapporteur on racism; it contends that AI-driven policies breach the United States’ obligations under the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), ratified in 1994.

The report documents a range of technologies. Autonomous surveillance towers, such as Anduril's, and small unmanned aircraft systems track people before they reach a land border; the groups say increased "smart border" deployment has coincided with historically high migrant deaths and with the framing of displaced people as security threats. The CBP One app previously required selfies but failed to recognise darker skin tones far more often than white faces, and it lacked important language translations. The Automated Targeting System drew on databases to predict visa overstays and disproportionately flagged Nigerians when travel restrictions rose in 2020. Inside the United States, ICE uses predictive tools such as a "Hurricane Score" supplied by B.I. Incorporated and the RAVEn platform, which draws on data from offices across 56 countries. USCIS employs Asylum Text Analytics and an Evidence Classifier to screen claims and documents, which can disadvantage non-English speakers and applicants with atypical records.

The report urges a decolonial approach to AI, invoking Cosmo uBuntu and demanding African and diaspora participation in design and operation. Its recommendations include prompt notification and opt-out options, federal bans on racially discriminatory AI uses, independent oversight, public disclosure, stakeholder consultation, remedies for harms, and city pledges not to share data for DHS AI development. Until systems are demonstrably free of discrimination and include diverse perspectives, the authors argue AI should not be used at any border.

Difficult words

  • discrimination – unfair treatment of people because of their race
    racial discrimination
  • embed – to include something firmly inside a system or tool
    embedded
  • autonomous – able to operate without human control
  • unmanned aircraft system – a flying vehicle without a pilot on board
    unmanned aircraft systems
  • deployment – the act of placing systems into operation
  • oversight – supervision that checks actions and enforces rules

Tip: hover, focus or tap highlighted words in the article to see quick definitions while you read or listen.

Discussion questions

  • Which recommendation from the report do you think would most reduce racial bias in AI systems, and why?
  • How might AI tools that fail to recognise darker skin tones affect migrants' access to services and protection?
  • What practical challenges could arise when including African and diaspora participation in AI design and operation?
