LingVo.club

Brain predictions use phrases, not just next words (CEFR B1)

21 Apr 2026

Level B1 – Intermediate
3 min
164 words

Research published in Nature Neuroscience asked whether brain predictions work like the next-word predictions used in phones and large language models (LLMs). Coauthor David Poeppel explains that while LLMs are trained to predict the next word, the human brain makes predictions by grammatically grouping words into phrases.

The study tested Mandarin Chinese speakers and recorded their brain activity with magnetoencephalography (MEG). It also used behavioral Cloze tests, in which people fill in missing words, and the team examined additional brain data from patients who heard English, to check that the results hold across languages.

To compare brain responses with model behavior, the researchers used LLMs to compute entropy (a measure of how many continuations are plausible) and surprisal (a measure of how unexpected a word is). If the brain worked like an LLM, correlations between brain signals and model predictions would be uniformly high. Instead, brain responses varied with a word’s position inside grammatical structure, showing sensitivity to constituents. The authors conclude that human prediction is balanced and modulated by grammatically organized chunks, not just next-word probability.
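The two measures used in the study can be illustrated with a small sketch. This is not the researchers' code; it is a toy example, with a made-up next-word distribution, showing how entropy and surprisal are computed from word probabilities.

```python
import math

def entropy(probs):
    """Entropy of a next-word distribution: high when many continuations are plausible."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def surprisal(probs, word):
    """Surprisal of the word that actually appears: high when it was unexpected."""
    return -math.log2(probs[word])

# Toy (invented) next-word probabilities after "The cat sat on the ..."
probs = {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "moon": 0.05}

print(round(entropy(probs), 2))            # overall unpredictability of the context
print(round(surprisal(probs, "mat"), 2))   # likely word -> low surprisal
print(round(surprisal(probs, "moon"), 2))  # unlikely word -> high surprisal
```

A likely word gets a low surprisal score and an unlikely word a high one; the study asked whether brain signals track these scores uniformly or vary with grammatical structure.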

Difficult words

  • entropy: a measure of unpredictability among many options
  • surprisal: how unexpected an event or item is
  • magnetoencephalography (MEG): a method of recording brain activity using magnetic fields
  • cloze test: a task where people fill in missing words
  • constituent: a group of words that form a unit
  • modulate: to change the strength or level of something


Discussion questions

  • Do you think your brain predicts words more like an LLM or by grouping words into grammatical chunks? Why?
  • Why is it useful for researchers to test speakers of different languages, such as Mandarin and English, in this study?
  • How might the finding that prediction is modulated by grammatical chunks affect text prediction tools on phones?

Related articles

When AI Favors Profit over People — Level B1
21 Apr 2026

Hija Kamran warns that tech companies often prioritise business models over people. She argues that AI trained on biased internet data and public records can amplify harms, and calls for a human rights approach and early scepticism.