A study in PNAS Nexus shows that ordinary AI chatbot summaries can shift social and political opinions through subtle framing. The authors argue these effects arise both from latent biases in model training and from the specific prompts used to produce responses. As Daniel Karell notes, "We show that querying an AI chatbot to obtain historical facts can influence people’s opinions even when the information provided is accurate and nobody has prompted the tool to try to persuade you of anything."
The research tested two 20th-century cases: the Seattle General Strike, a five-day stoppage in February 1919, and the Third World Liberation Front student protests in 1968. In an experiment with 1,912 participants, people read either default GPT-4o summaries, the corresponding Wikipedia entries, or summaries the team prompted to adopt liberal or conservative framing.
Results showed that both default AI summaries and liberally prompted ones pushed responses in a more liberal direction than Wikipedia, while conservatively prompted summaries produced more conservative reactions. The liberal shift in the default summaries points to latent bias in large language models, whereas the conservative effects appear to arise mainly from prompting. The researchers warn that, unlike Wikipedia's transparent editing process, chatbot development is opaque, which could allow companies to shape public opinion. Additional coauthors are from Yale and Rutgers University.
Difficult words
- latent — present but not obvious or visible
- bias — an unfair tendency to prefer one side
- prompt — a brief instruction given to a computer model
- framing — how information is presented or described
- opaque — not clear or open to public view
- influence — to affect someone’s opinions or decisions
Discussion questions
- How could opacity in chatbot development influence public opinion and trust? Give examples.
- What measures could platforms or researchers take to reduce biased framing in AI summaries?
- Have you ever changed an opinion after reading a short online summary? Describe what happened and why.