I have spent a lot of time experimenting with Chain of Thought professionally, and I have yet to see any evidence that what's happening with CoT is any more (or less) than this: if you let the model run a bit longer, it enters a region close to the typical set, and by the time it's ready to answer you have a high probability of getting a good one.
There's absolutely no "reasoning" going on here, except that sometimes sampling from the typical set near the region of your answer is going to look very similar to how humans reason before coming up with an answer.
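For concreteness, here is a minimal sketch of what "sampling from the typical set" means at the token level, roughly in the spirit of locally typical sampling (Meister et al., 2022). The function name, the `mass` cutoff, and the random logits in the usage line are all illustrative assumptions, not anyone's production decoder:

```python
import numpy as np

def typical_sample(logits, mass=0.9, rng=None):
    """Pick the next token from the 'locally typical' subset: tokens whose
    surprisal -log p(x) is closest to the distribution's entropy H(p)."""
    rng = rng or np.random.default_rng()
    p = np.exp(logits - logits.max())
    p /= p.sum()                                     # softmax over the vocabulary
    surprisal = -np.log(p + 1e-12)                   # information content per token
    entropy = float((p * surprisal).sum())           # H(p): the 'typical' surprisal
    order = np.argsort(np.abs(surprisal - entropy))  # most typical tokens first
    cum = np.cumsum(p[order])
    keep = order[: np.searchsorted(cum, mass) + 1]   # smallest typical subset covering `mass`
    q = p[keep] / p[keep].sum()                      # renormalise over the kept tokens
    return int(rng.choice(keep, p=q))

logits = np.random.default_rng(0).normal(size=50_000)  # stand-in for an LM head's output
token_id = typical_sample(logits, mass=0.9)
```

In this framing, a longer chain of thought just gives the sampler more steps for the running sequence to settle into this high-typicality region before it commits to an answer.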
Well, no, it proves that Messi can reason efficiently without inner speech.
Humans are the only organisms known to do System 2 thinking (which doesn't mean we're the only ones that do it, just that we don't know whether, say, whales do), but System 2 is what the author is talking about when they refer to Chains of Thought.
System 1 is what they're referring to when they talk about Messi reacting to an unusual situation on the field.
Related anecdote: I tested myself for ADHD by taking amphetamines. I normally think by intuitive leaps from point to point, without doing the intermediate steps consciously. I found that during this experience my System 2 thinking was fast enough to follow and I actually experienced proper chains of thought. Or I was whizzing my tits off and hallucinated the whole thing. Not sure yet. I should repeat the experiment.
Pedantic maybe -- but does this need two plurals?
If true, this is somehow meant to be taken as a rule that language and thought are fundamentally different things.
No wonder he got sacked, lol.