How Generative AI Changes Disinformation Campaigns
CEFR B2
14 Nov 2025
Adapted from Metamorphosis Foundation, Global Voices • CC BY 3.0
Photo by Hartono Creative Studio, Unsplash
An interview published on November 12, 2025 by Antidisinfo.net and republished under a content-sharing agreement with Global Voices and Metamorphosis Foundation examined how generative AI changes foreign interference. Laura Jasper of the Hague Centre for Strategic Studies (HCSS) described new strategic threats and what analysts must do differently as generative AI makes disinformation faster, wider and easier to tailor.
Jasper said attribution is now a question of probability rather than certainty because adversaries often employ proxies, false flags and commercial tools, including generative AI. She advised analysts to assign confidence levels (for example low, medium or high) and to publish the basis for those assessments. This practice preserves credibility and helps build shared understanding across organisations.
HCSS studies in Europe and the Indo-Pacific identify a common vulnerability: heavy reliance on commercial platforms combined with fractures in social trust, such as polarization and low confidence in institutions. Jasper argued that hostile actors exploit these conditions to amplify divisions. Measuring whether a foreign campaign succeeds requires observable changes in behaviour, not just opinion shifts. Analysts must define the behavioural end-state — for example reduced voter turnout or increased protest participation — and establish clear baselines and counterfactuals.
To link exposure to action, Jasper recommends combining quantitative sources (polling, mobility data, transaction or participation records) with qualitative insights such as interviews and focus groups. True resilience appears when the behaviours an adversary intends do not materialise, or when societies rebound quickly after an attack. Jasper cited the study Start with the End: Effect Measurement of Behavioural Influencing in Military Operations as a basis for this approach. Asked about legal “grey zones,” she declined to advise methods outside the law and instead urged broad, granular engagement across society, involving community builders, investigative journalists and local actors rather than only top-level government responses.
Difficult words
- generative AI — software that creates text or images
- disinformation — false information spread to mislead people
- attribution — deciding who is responsible for an action
- proxy — person or group acting on behalf of others
- false flag — deceptive act blamed on a different actor
- polarization — division of society into opposing groups
- baseline — measure used for comparison over time
- counterfactual — what would have happened without an action
Discussion questions
- What are the benefits and possible risks of publishing the basis for confidence assessments, as Jasper recommends?
- Why does Jasper argue that analysts should measure behaviour rather than only opinion? Give examples of behavioural end-states mentioned in the text.
- How could involving community builders, investigative journalists and local actors improve a society's resilience to foreign interference? Give practical examples.
Related articles
Zenica School of Comics: Art and Education for Children
The Zenica School of Comics began during the 1992–95 war and has taught around 200 young artists. The school still runs, faces changes from tablets and AI, and the regional comics scene survives through festivals and cooperation.
Small pause to slow misinformation on social media
Researchers at the University of Copenhagen propose a small pause before sharing on platforms like X, Bluesky and Mastodon. A computer model shows that a short delay plus a brief learning step can reduce reshares and improve shared content quality.