India’s growing use of AI raises privacy concerns
CEFR B2
23 Apr 2026
Adapted from Rezwan, Global Voices • CC BY 3.0
Photo by ADITYA PRAKASH, Unsplash
India’s rapid adoption of AI has raised major concerns about privacy, civil liberties and state power. The India AI Impact Summit in February 2026 brought together global leaders, technology firms and civil society, while Delhi Police mounted extensive AI-enabled surveillance for the event: hundreds of cameras at the exhibition centre, thousands more across central Delhi, multiple control rooms, AI-enabled smart glasses and large numbers of personnel matching faces against police databases to generate instant alerts.
Investigations and rights groups have documented widespread deployments. Project Panoptic recorded over a hundred government contracts for facial recognition by 2024, and reporting from 2025 shows growing AI use in travel and public spaces. The Digi Yatra app links Aadhaar IDs, boarding passes and face biometrics at airports; the Internet Freedom Foundation reports that passengers are often pushed to enrol and highlights opaque data practices, noting that around 75 percent of Digi Yatra Foundation shares are privately owned, which places the body outside the Right to Information Act, 2005.
Independent probes by media outlets found that facial recognition can fail for women whose appearance changes after pregnancy, illness or ageing. In India’s Integrated Child Development Services, which serves about 47 million pregnant women, nursing mothers and young children, a facial recognition step introduced in July 2025 failed to match many faces; by the end of 2025, almost half of the scheme’s intended beneficiaries had missed food rations.
On policy, India relies on a mix of existing laws and non-binding guidance: MeitY issued India AI Governance Guidelines in November 2025, the Artificial Intelligence (Ethics and Accountability) Bill, 2025 would create ethics reviews and bias audits but has not been enacted, and the DPDP Rules 2025 impose consent and data limits. MeitY’s IndiaAI Mission, approved in March 2024, funds projects on bias mitigation, privacy tools and deepfake detection, yet rights groups say safeguards remain voluntary and state surveillance continues without binding protections. International bodies and advocates urge mandatory human-rights due diligence, pre-deployment impact assessments and clearer oversight to reduce the risk of digital authoritarianism.
Recommended safeguards include:
- Human-rights impact assessments before high-risk AI deployment
- Public disclosure of systems, training data and error rates
- Legal oversight and remedies for misuse of surveillance AI
Difficult words
- surveillance — close monitoring of people or public places
- facial recognition — technology that identifies people from face images
- consent — permission given by a person for using data
- bias — unfair or systematic error in data or decisions
- oversight — official supervision or control of activities
- impact assessment — study of likely effects before a project starts
- digital authoritarianism — use of technology to increase state control and repression
Discussion questions
- How might errors in facial recognition affect people’s access to public services? Give examples from the article.
- Which of the three recommended safeguards at the end would you prioritise, and why?
- Are voluntary safeguards enough to prevent misuse of surveillance AI? Explain your view with reasons.