LingVo.club

Brain predictions use phrases, not just next words

21 Apr 2026

Level B2 – Upper-intermediate
4 min
214 words

Scientists examined whether human anticipatory language processing works the same way as next-word prediction in large language models (LLMs). The new study, published in Nature Neuroscience, reports that the brain predicts upcoming language using larger grammatical units, or constituents, rather than relying solely on the single most likely next word. "While LLMs are trained and optimized to predict the next word, the human brain makes predictions by grammatically grouping words into phrases," says coauthor David Poeppel.

The researchers ran several experiments with Mandarin Chinese speakers and recorded neural responses with magnetoencephalography (MEG). They complemented those recordings with behavioral Cloze tests, where readers supply missing words, and they reanalyzed additional brain data from patients exposed to English to test language generality.

  • They used LLMs to quantify predictability via entropy and surprisal.
  • High entropy means many possible next words; high surprisal means a word is unexpected.
  • They compared model-based predictions to time-locked brain responses for the same sentences.
  • Instead of a uniform match, brain correlations depended on a word’s position in grammatical phrases.
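The two model-based measures above can be sketched in a few lines. This is a toy illustration only: the next-word distribution below is invented for the example and is not taken from the study.

```python
import math

# Hypothetical next-word probability distribution from a language model
# for the context "She zipped up her ..." (values invented for illustration).
next_word_probs = {"suitcase": 0.5, "bag": 0.3, "jacket": 0.15, "banana": 0.05}

# Entropy: how uncertain the model is about the next word.
# Many similarly likely options -> high entropy.
entropy = -sum(p * math.log2(p) for p in next_word_probs.values())

# Surprisal: how unexpected the word that actually appeared is.
# A low-probability word -> high surprisal.
surprisal = -math.log2(next_word_probs["banana"])
```

Here the entropy is about 1.65 bits, and the unlikely word "banana" carries about 4.32 bits of surprisal; a highly expected word like "suitcase" would carry only 1 bit.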

These results indicate that human prediction is modulated by grammatically organized chunks, a sensitivity that LLM next-word probabilities do not capture. The findings raise questions about how computational models should represent linguistic structure to better reflect human language processing.

Difficult words

  • anticipatory – predicting something before it happens
  • constituent – a grammatical unit like a phrase
  • magnetoencephalography – a brain activity recording method
  • Cloze test – a task where readers fill in missing words
  • entropy – a measure of how many choices exist
  • surprisal – a measure of how unexpected something is
  • modulate – to change or influence the strength or level


Discussion questions

  • What benefits might there be if computational models represented grammatical chunks like the human brain?
  • How does using both Mandarin and English data affect the study's claims about general language processing?
  • How could these findings influence the design of text-prediction features in real applications?
