On the scaling relationship between cloze probabilities and language model next-token prediction
Cassandra L. Jacobs, Morgan Grobol · Feb 19, 2026 · Citations: 0
How to use this page
Use this page to decide whether the paper is strong enough to influence an eval design. If the signals below are thin, treat it as background context and compare it against the stronger hub pages before making protocol choices.
Coverage: Stale
Paper metadata checked: Feb 19, 2026, 9:29 PM (Stale)
Protocol signals checked: Feb 19, 2026, 9:29 PM (Stale)
Signal strength: Low
Model confidence: 0.15
Abstract
Recent work has shown that larger language models better predict eye movement and reading time data. While even the best models under-allocate probability mass to human responses, larger models produce higher-quality estimates of next tokens and of their likelihood of production in cloze data: they are less sensitive to lexical co-occurrence statistics and more closely aligned semantically with human cloze responses. These results support the claim that the greater memorization capacity of larger models helps them guess more semantically appropriate words, but makes them less sensitive to the low-level information that is relevant for word recognition.
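To make the comparison concrete, here is a minimal sketch (not the authors' code) of the kind of measurement the abstract describes: how much probability mass a causal language model assigns to the responses humans actually produced in a cloze task. The model name, the context sentence, and the cloze counts are illustrative assumptions.

```python
# Sketch: compare a causal LM's next-token distribution to human cloze
# probabilities. Model choice and cloze data below are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM; the paper compares model sizes
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical cloze item: a context plus counts of human completions.
context = "The children went outside to"
cloze_counts = {" play": 70, " eat": 20, " swim": 10}  # leading space marks a GPT-2 word boundary
total = sum(cloze_counts.values())

# Next-token distribution from the model at the end of the context.
inputs = tokenizer(context, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Probability mass the model allocates to the human responses, alongside
# the empirical cloze probability of each response.
mass = 0.0
for word, count in cloze_counts.items():
    token_ids = tokenizer.encode(word)
    p_model = probs[token_ids[0]].item()  # first-token probability as a proxy for multi-token words
    mass += p_model
    print(f"{word!r}: cloze={count / total:.2f}  model={p_model:.4f}")
print(f"Probability mass on human responses: {mass:.4f}")
```

Under the paper's framing, running this over many cloze items and model sizes would let one check whether larger models allocate more of this mass to human responses and rank the responses in a more human-like order.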