A team at the University of Washington tested whether AI can learn cultural values by observing human behavior. The study appears in PLOS One and builds on earlier UW work showing that 19-month-old children raised in Latino and Asian households were more prone to altruism.
Researchers recruited adults who identified as white (190 people) and adults who identified as Latino (110 people). Each group’s data trained a separate agent using inverse reinforcement learning (IRL). Unlike standard reinforcement learning, IRL lets an agent watch behavior and infer the underlying goals and rewards.
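The core idea of IRL, working backward from observed choices to the rewards that explain them, can be sketched with a toy softmax choice model. All numbers and names here are hypothetical illustrations, not data or code from the study:

```python
import math

def infer_reward_gap(n_help, n_keep):
    """Infer how much more rewarding 'help' is than 'keep',
    assuming choices follow a softmax (logistic) model:
    P(help) = 1 / (1 + exp(-(r_help - r_keep))).
    The maximum-likelihood reward gap is the log-odds of helping."""
    p = n_help / (n_help + n_keep)
    return math.log(p / (1 - p))

# Hypothetical counts: one group helps 70 of 100 times, another 40 of 100.
gap_a = infer_reward_gap(70, 30)   # positive: helping looks rewarding
gap_b = infer_reward_gap(40, 60)   # negative: keeping looks rewarding
```

An agent given the larger inferred reward for helping would then help more often in new situations, which is the pattern the study reports.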
In a modified version of the game Overcooked, players could give away onions to help another player who had to walk farther. People in the Latino group helped more, and the agent trained on Latino data behaved more altruistically in the game and in a separate donation decision. The authors say more research is needed with other cultural groups and real-world problems.
Difficult words
- cultural — connected with the habits and beliefs of groups
- value — idea about what is important or right
- altruism — willingness to help others without reward
- inverse reinforcement learning — method where an agent infers goals from actions
- agent — computer program that makes decisions in tasks
- recruit — to find and sign up people for a study or job
- donation — money or help given to a person or cause
Discussion questions
- Do you think AI should learn cultural values by observing people? Why or why not?
- What other everyday situations could researchers use to test whether AI learns cultural values?
- How might an AI that learns to be more altruistic affect donations or teamwork in real life?