When AI Favors Profit over People
CEFR B2
21 Apr 2026
Adapted from Guest Contributor, Global Voices • CC BY 3.0
Photo by Marija Zaric, Unsplash
The piece appears in the series “Don't ask AI, ask a peer,” a collaboration among Global Voices, the Association for Progressive Communication (APC) and GenderIT, and is part of Global Voices' April 2026 Spotlight series, "Human perspectives on AI." The author, Hija Kamran, says her work makes her cautious about new technologies and that she has often been a late adopter.
Kamran argues technology is not neutral and that tech companies repeatedly show their primary commitment is to business models rather than to people. To illustrate this, she cites the remark "Senator, we run ads," attributed to Mark Zuckerberg, and reports a company representative told her, "I encourage people to read our terms of service." She sees these responses as evidence of limited transparency and accountability.
Training data for AI systems is drawn from the internet and public records, and it reflects histories of exclusion, racism, sexism and economic inequality. When models learn from that data, they can encode and amplify existing harms while presenting outputs as neutral. Corporate incentives — profit motives, shareholders and growth targets — influence which problems are prioritised, how fast products are deployed, and whose land, knowledge or lives become "collateral damage." Kamran also highlights the risk of dehumanisation, especially in militarised contexts where people can be reduced to data points and treated as targets.
AI systems generate outputs from probabilities and do not understand context, history or responsibility; they can imitate human patterns, but they cannot be human, cannot care, and cannot hold relationships. Kamran calls for a human rights approach that shifts accountability to those with power and urges scepticism early in the development and commercialisation of technologies. She recommends asking:
- Who built the system?
- How does the system work?
- Who benefits from it?
Hija Kamran is the lead editor of GenderIT.org and an advocacy strategist within APC’s Women’s Rights Programme.
Difficult words
- accountability — duty to explain or accept responsibility
- transparency — open and clear information about actions
- encode — convert information into a coded form
- amplify — make something stronger or more noticeable
- collateral damage — unintended harm to people or things
- dehumanisation — treating people as less than human
- probability — measure of how likely something is
Discussion questions
- How might asking "Who benefits from it?" change the development or use of AI systems?
- What risks come from dehumanisation in militarised contexts, and how could they be reduced?
- Do you think a human rights approach could make companies more accountable? Why or why not?