Artificial intelligence tools are spreading quickly; in April, ChatGPT reached a billion weekly active users. At the same time, researchers and journalists have documented harms from biased AI, such as unequal medical treatment and hiring tools that discriminate against female and Black candidates. New research from the University of Texas at Austin, led by John‑Patrick Akinyemi and Hüseyin Tanriverdi (a McCombs PhD candidate in IROM), examined 363 algorithms in the AI, Algorithmic, and Automation Incidents and Controversies repository.
The team compared each problematic algorithm with a similar algorithm that had not been called out. They examined both the algorithms and the organizations that created and used them. The study highlights three related factors that raise the risk of unfair outcomes: lack of a clear ground truth, simplification of real‑world complexity, and limited stakeholder involvement.
The researchers argue that reducing bias requires more than improving accuracy. Developers should open black boxes, account for complex real-world conditions, seek input from diverse stakeholders, and clarify ground truths. The research appears in MIS Quarterly and was reported by UT Austin.
Difficult words
- algorithm — A set of steps used to solve a problem.
- biased — Showing unfair preference for or against some people.
- discriminate — To treat people unfairly because of the group they belong to.
- ground truth — A real fact or correct answer in data.
- stakeholder — A person or group with an interest in something.
- repository — A place where information or data is stored.
- black box — A system that hides how it works.
Discussion questions
- Do you think AI tools for hiring or medicine can be unfair? Why or why not?
- How could developers include more diverse people when they build AI systems?
- Would you trust an AI tool that is a 'black box'? Explain your opinion briefly.