LingVo.club

Human Rights as the Foundation for AI (CEFR B2)

29 Apr 2026

Adapted from Guest Contributor, Global Voices CC BY 3.0

Photo by Sasun Bughdaryan, Unsplash

Level B2 – Upper-intermediate
6 min
339 words

AI is increasingly part of everyday life and can affect dignity, freedom and well-being. The author argues that AI must be built, used and governed around human rights so it supports broad human values rather than deepening existing power imbalances. If systems violate rights, people must have legal redress.

The article traces a long human-rights lineage to show why these ideas matter for digital systems. Early examples include the Cyrus Cylinder (539 BC), which records King Cyrus the Great freeing slaves and allowing religious choice. In the Middle Ages the Magna Carta limited royal power and introduced due process. In the 1700s thinkers such as John Locke argued for natural rights, ideas that influenced the United States’ Declaration of Independence (1776) and France’s Declaration of the Rights of Man and of the Citizen (1789). After World War II the 1948 Universal Declaration of Human Rights was adopted and later inspired the ICCPR and ICESCR.

From that history five non-negotiable rights are proposed as a basis for human-centred AI:

  • The right to life and liberty: AI must protect life, keep humans in control, and avoid use in militarisation or acts that could lead to genocide.
  • The right to equality: systems should be fair and free of bias, achieved through steps such as correcting biased training data, performing bias audits, creating feedback loops and documenting systems for explainability and accountability.
  • The right to speak freely: users should know why information is promoted or hidden and AI should not marginalise languages or enable manipulation.
  • The right to essentials: AI can help distribute food, manage power grids and provide remote health care, but design and governance must aim for equitable access and avoid a digital divide.
  • The right to privacy: data minimisation is essential; techniques such as differential privacy and federated learning limit exposure, and people should have consent, data sovereignty and the right to be forgotten.

Grounding AI in the rights to life, equality, speech, essentials and privacy aims to make technology serve human needs and protect people from harm.

Difficult words

  • dignity: inherent worth and respect of each person
  • redress: legal remedy or compensation for a wrong
  • due process: legal procedures that ensure fair treatment
  • bias: systematic unfair preference or disadvantage
  • militarisation: use of technology or policy for military purposes
  • marginalise: treat as less important or push aside
  • data minimisation: collecting only necessary personal information
  • differential privacy: technique adding noise to protect individual data
  • federated learning: training models across devices without central data sharing


Discussion questions

  • How could legal redress help people affected by harmful AI systems in your country or community?
  • What steps could designers take to reduce bias in AI, and what challenges might they face?
  • How might data minimisation and techniques like differential privacy affect everyday services you use?

Related articles

AI and racial bias at US borders — Level B2
25 Apr 2026

Rights groups warn that AI is increasingly used in US border control and can deepen racial discrimination. A 2023 report to the UN says some systems harm migrants and calls for bans, oversight and diverse participation.