Table 1 is even odder: H-neurons predict hallucination ~75% of the time, but a similar percentage of random neurons predicts hallucinations ~60% of the time, which doesn't seem like a huge difference to me.
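To make the "doesn't seem huge" point concrete: whether a 75% vs 60% gap matters depends a lot on how many examples it was measured over, and the table would need to supply that. A back-of-the-envelope two-proportion z-test with made-up sample sizes (not the paper's actual counts):

```python
# Rough sketch of how big the 75% vs 60% gap is under hypothetical sample sizes.
from math import sqrt, erfc

def two_proportion_z(p1, p2, n1, n2):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))  # two-sided p-value

# With 100 examples per condition (made-up), 75% vs 60% is borderline:
print(two_proportion_z(0.75, 0.60, 100, 100))  # z ≈ 2.3, p ≈ 0.02
# With only 30 per condition (also made-up), the same gap isn't significant:
print(two_proportion_z(0.75, 0.60, 30, 30))    # z ≈ 1.2, p ≈ 0.2
```

So even granting the paper's numbers, the practical question is how far above a random-neuron baseline the H-neurons really are, not just that they clear 50%.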
No. Human beings have experiential, embodied, temporal knowledge of the world through our senses. That is why we can, say, empirically know something, which is vastly different from semantically or logically knowing something. Yes, human beings also have probabilistic ways of understanding the world and interacting with others. But we have many other forms of knowledge as well, and the LLM way of interpreting data is by no means the primary way in which we become confident that something is true or false.
That said, I don't get up in arms about the term "hallucination", although I prefer the term "confabulation", per neuroscientist Anil Seth. Many clunky metaphors are now mainstream, and what matters most is that the engineers and researchers who study these things are OK with the usage.
But what I think all the people who dismiss objections to the term as "arguing semantics" are missing is the fundamental point: LLMs have no intent, and they have no way of determining whether data is empirically true or not. This is why the framing, not just the semantics, of this piece is flawed. "Hallucination" is a feature of LLMs that exists at the conceptual level, not a design flaw of current models. They have pattern recognition, which gets us very far in terms of knowing things, but people who rely only on such methods of knowing are most often referred to as conspiracy theorists.
[submitters: one reason for not editorializing titles is that it makes the threads about that!]