A study in PNAS Nexus shows that ordinary AI chatbot summaries can shift social and political opinions through subtle framing. The authors argue these effects arise both from latent biases in model training and from the specific prompts used to produce responses. As Daniel Karell notes, "We show that querying an AI chatbot to obtain historical facts can influence people’s opinions even when the information provided is accurate and nobody has prompted the tool to try to persuade you of anything."
The research tested two 20th-century cases: the Seattle General Strike, a five-day stoppage in February 1919, and the Third World Liberation Front student protests in 1968. In an experiment with 1,912 participants, people read either default GPT-4o summaries, the corresponding Wikipedia entries, or summaries the team prompted to adopt liberal or conservative framing.
Results showed that both default AI summaries and liberally prompted ones pushed responses in a more liberal direction compared with Wikipedia, while conservative prompts produced more conservative reactions relative to Wikipedia. The liberal shift in default summaries points to latent bias in large language models, whereas the conservative effects appear to arise mainly from prompting. The researchers warn that, unlike Wikipedia's transparent editing process, chatbot development is opaque, which could allow companies to shape public opinion. Additional coauthors are from Yale and Rutgers University.
Difficult words
- latent — present but not obvious or visible
- bias — an unfair tendency to prefer one side
- prompt — a brief instruction given to a computer model
- framing — how information is presented or described
- opaque — not clear or open to public view
- influence — to affect someone’s opinions or decisions
Tip: hover over, focus on, or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- How could opacity in chatbot development influence public opinion and trust? Give examples.
- What measures could platforms or researchers take to reduce biased framing in AI summaries?
- Have you ever changed an opinion after reading a short online summary? Describe what happened and why.
Related articles
Alternative splicing linked to mammal lifespan
A study in Nature Communications compared alternative splicing across 26 mammal species (lifespans 2.2–37 years) and found splicing patterns better predict maximum lifespan than gene activity; the brain shows many lifespan-linked events controlled by RNA-binding proteins.
Young men in South Korea move to the political right
Surveys after the June 2025 snap presidential election show many young men in South Korea have shifted to the political right, creating a large gender gap on feminism, redistribution and immigration, even as most still support democratic rules.
Researchers Call for Clear Rules on Gene-Edited Crops in Mexico
Mexican researchers want rules that distinguish gene-edited crops from GMOs. They launched a petition asking the government for evidence-based regulation while warning a March decree banning genetically modified maize could also affect gene editing.