The study, published in PNAS Nexus, finds that ordinary chatbot responses can influence people’s opinions even when the information is accurate. Matthew Shu, a 2025 graduate of Yale College, is the lead author, and Daniel Karell, an assistant professor of sociology at Yale, is the senior author.
Researchers examined two 20th-century events: the Seattle General Strike, a five-day work stoppage in February 1919, and the Third World Liberation Front student protests in 1968. To test effects, 1,912 participants read one of three types of material: default summaries from GPT-4o, the corresponding Wikipedia entries, or summaries the team prompted to adopt liberal or conservative framing.
Compared with Wikipedia, default AI summaries and liberal-prompted summaries led participants to express more liberal opinions about the events. Summaries prompted to be conservative led to more conservative opinions relative to Wikipedia. The authors say the default summaries’ liberal shift indicates persuasive effects of latent bias in large language models, though the effects were modest.
Difficult words
- influence — make someone change their opinion or decision
- framing — way information is presented or described
- default — standard choice or setting used automatically
- latent — present but hidden or not obvious
- bias — preference or unfair opinion for or against something
- modest — small in size, amount, or effect
Discussion questions
- Do you think people should trust chatbot summaries about history? Why or why not?
- How could small biases in AI summaries affect public opinion or decisions?
- What steps could researchers or companies take to reduce bias in AI summaries?