LingVo.club

India’s growing use of AI raises privacy concerns (CEFR B2)

23 Apr 2026

Level B2 – Upper-intermediate
7 min
371 words

India’s rapid adoption of AI has raised major concerns about privacy, civil liberties and state power. The India AI Impact Summit in February 2026 brought global leaders, technology firms and civil society together, while Delhi Police used extensive AI-enabled surveillance for the event—deploying hundreds of cameras at the exhibition centre, thousands across central Delhi, multiple control rooms, AI-enabled smart glasses and large numbers of personnel to match faces against police databases and generate instant alerts.

Investigations and rights groups have documented widespread deployments. Project Panoptic recorded over a hundred government contracts for facial recognition by 2024, and reporting from 2025 shows growing AI use in travel and public spaces. The DigiYatra app links Aadhaar IDs, boarding passes and face biometrics at airports; the Internet Freedom Foundation reports that passengers are often pushed to enrol and highlights opaque data practices, noting that around 75 percent of Digi Yatra Foundation shares are privately owned, which places the body outside the Right to Information Act, 2005.

Independent probes by media outlets found that facial recognition can fail for women whose appearance changes after pregnancy, illness or ageing. India’s Integrated Child Development Services, which serves about 47 million pregnant women, nursing mothers and young children, saw almost half of its intended beneficiaries miss food rations by the end of 2025 after a facial recognition step was introduced in July 2025 and the system did not match many faces.

On policy, India relies on a mix of existing laws and non-binding guidance. MeitY issued the India AI Governance Guidelines in November 2025; the Artificial Intelligence (Ethics and Accountability) Bill, 2025 would create ethics reviews and bias audits but has not been enacted; and the DPDP Rules, 2025 impose consent and data limits. MeitY’s IndiaAI Mission, approved in March 2024, funds projects on bias mitigation, privacy tools and deepfake detection, yet rights groups say safeguards remain voluntary and state surveillance continues without binding protections. To reduce the risk of digital authoritarianism, international bodies and advocates call for:

  • Human-rights impact assessments before high-risk AI deployment
  • Public disclosure of systems, training data and error rates
  • Legal oversight and remedies for misuse of surveillance AI

Difficult words

  • surveillance: close monitoring of people or public places
  • facial recognition: technology that identifies people from face images
  • consent: permission given by a person for using data
  • bias: unfair or systematic error in data or decisions
  • oversight: official supervision or control of activities
  • impact assessment: study of likely effects before a project starts
  • digital authoritarianism: use of technology to increase state control and repression


Discussion questions

  • How might errors in facial recognition affect people’s access to public services? Give examples from the article.
  • Which of the three recommended safeguards at the end would you prioritise, and why?
  • Are voluntary safeguards enough to prevent misuse of surveillance AI? Explain your view with reasons.
