What does it mean to say that we humans act with intent? It means that we have some expectation or prediction about how our actions will affect what happens next, and we choose our actions based on how much we like that effect. The ability to predict is fundamental to our ability to act intentionally.
So in my mind: even if you grant all the AI naysayers' complaints about how LLMs aren't "actually" thinking, you can still believe that they will end up being a component in a system which actually "does" think.
That said, I think the author's use of "bag of words" here is a mistake. Not only does "bag of words" already have an established technical meaning in natural language processing (the bag-of-words model), but I don't think the metaphor explains anything. Gen AI tricks laypeople into treating its token inferences as "thinking" because it is trained to replicate the semiotic appearance of doing so. A "bag of words" doesn't sufficiently explain this behavior.
My second thought is that it's not the metaphor that is misleading. People have been told thousands of times that LLMs don't "think", don't "know", don't "feel", but are "just a very impressive autocomplete". If they still really want to completely ignore that, why would they suddenly change their mind with a new metaphor?
Humans are lazy. If it looks true enough and it costs less effort, humans will love it. "Are you sure the LLM did your job correctly?" is completely irrelevant: people couldn't care less whether it's correct or not. As long as the employer believes that the employee is "doing their job", that's good enough. So the question is really: "do you think you'll get fired if you use this?". If the answer is "no, actually I may even look more productive to my employer", then why would people not use it?
Woah, that hit hard
Sure, this is not the same as being a human. Does that really mean, as the author seems to believe without argument, that humans need not be afraid that it will usurp their role? In how many contexts is the utility of having a human, if you squint, not just that a human has so far been the best way to "produce the right words in any given situation", that is, to use the meat-bag only in its capacity as a word-bag? In how many more contexts would a really good magic bag of words be better than a human, if it existed, even if the current human is used somewhat differently? The author seems to rest assured that a human (long-distance?) lover will not be replaced by a "bag of words"; why not, especially once the bag of words is also duct-taped to a bag of pictures and a bag of sounds?
I can just imagine someone - a horse breeder, or an anthropomorphised horse - dismissing all concerns on the eve of the automotive revolution, talking about how marketers and gullible marks are prone to hippomorphising anything that looks like it can be ridden, and sprinkling in anecdotes about kids riding broomsticks, legends of pegasi, and patterns of stars in the sky interpreted as horses since ancient times.
That said, I was struck by a recent interview with Anthropic’s Amanda Askell [2]. When she talks, she anthropomorphizes LLMs constantly. A few examples:
“I don't have all the answers of how should models feel about past model deprecation, about their own identity, but I do want to try and help models figure that out and then to at least know that we care about it and are thinking about it.”
“If you go into the depths of the model and you find some deep-seated insecurity, then that's really valuable.”
“... that could lead to models almost feeling afraid that they're gonna do the wrong thing or are very self-critical or feeling like humans are going to behave negatively towards them.”
[1] https://www.anthropic.com/research/team/interpretability
I stumbled across a good-enough analogy based on something she loves: refrigerator magnet poetry, which, if it's any good, consists not just of words but also word fragments like "s", "ed", and "ing", kinda like LLM tokens. I said that ChatGPT is like refrigerator magnet poetry in a magical bag of holding that somehow always gives you the tile that's the most, or nearly the most, statistically plausible next token given the previous text. E.g., if the magnets already up read "easy come and easy ____", the bag would be likely to produce "go". That got the idea into her head that these things operate on plausibility ratings over a statistical soup of words, not on anything in the real world nor any internal cogitation about facts. Any knowledge or thought apparent in the LLM was done by the original human authors of the words in the soup.
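For the curious, here is a minimal Python sketch of that "magic bag" idea. The continuation counts are invented purely for illustration; a real LLM replaces this lookup table with an enormous learned function over the whole context.

```python
# Toy sketch of the "magic bag": given the tiles already on the fridge, hand back
# the most statistically plausible next tile. The counts are made up; a real LLM
# replaces this lookup table with a huge learned function over the whole context.
from collections import Counter

continuation_counts = {
    ("easy", "come", "and", "easy"): Counter({"go": 97, "money": 2, "living": 1}),
}

def magic_bag(context_words):
    """Return the most plausible next tile for this context, if the bag knows one."""
    counts = continuation_counts.get(tuple(context_words[-4:]))
    if counts is None:
        return None  # the toy bag has never seen this context
    tile, _ = counts.most_common(1)[0]
    return tile

print(magic_bag("easy come and easy".split()))  # -> go
```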
I also know that we data and tech folks will probably never win the battle over anthropomorphization.
The average user of AI, never mind folks who should know better, is so easily convinced that AI "knows," "thinks," "lies," "wants," "understands," etc. Add to this that all AI hosts push this perspective (and why not, it's the easiest white lie to get the user to act so that they get a lot of value), and there's really too much to fight against.
We're just gonna keep on running into this, and it'll be like when you take chemistry and physics and the teachers say, "it's not actually like this, but we'll get to how it really works some years down the line; just pretend this is true for the time being."
It is... such a retrospective narrative. It's so obvious that the author learned about this example first, then came up with the reasoning later, just to make it fit his view of LLMs.
Imagine if ChatGPT had answered this question correctly. Would that change the author's view? Of course not! They'd just say:
> “Bag of words” is also a useful heuristic for predicting where an AI will do well and where it will fail. “Who reassigned the species Brachiosaurus brancai to its own genus, and when?” is an easy task for a bag of words, because the information has appeared in the words it memorizes.
I highly doubt this author would have predicted that a "bag of words" could do image editing before OpenAI released that.
A test I did myself was to ask Claude (the LLM from Anthropic) to write working code for entirely novel instruction set architectures (e.g., custom ISAs from the game Turing Complete [5]), which is difficult to reconcile with pure retrieval.
[1] Lovelace, A. (1843). Notes by the Translator, in Scientific Memoirs Vol. 3. ("The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.") Primary source: https://en.wikisource.org/wiki/Scientific_Memoirs/3/Sketch_o.... See also: https://www.historyofdatascience.com/ada-lovelace/ and https://writings.stephenwolfram.com/2015/12/untangling-the-t...
[2] https://academic.oup.com/mind/article/LIX/236/433/986238
[3] https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
[4] https://web.stanford.edu/class/sts145/Library/life.pdf
[5] https://store.steampowered.com/app/1444480/Turing_Complete/
Unfortunately, its corpus is bound to contain noise and nonsense that follows no formal reasoning system but contributes to the ill-advised idea that an AI should sound like a human to be considered intelligent. So it is perhaps not a bag of words but a bag of probabilities. This matters because the fundamental problem is that an LLM is not able, by design, to correctly model the most fundamental precept of human reason, namely the law of non-contradiction. An LLM must, I repeat, must assign non-vanishing probability to both sides of a contradiction, and what's worse, the winning side loses: since long chains of reasoning are modelled probabilistically, the longer the chain, the less likely an LLM is to follow it. Moreover, whenever there is actual debate on an issue, such that the corpus is ambiguous, the LLM necessarily becomes chaotic on that issue.
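A back-of-the-envelope illustration of that chain-decay point, assuming a made-up per-step reliability of 0.99:

```python
# If each step of a derivation is followed faithfully with probability p,
# the chance of completing the whole chain shrinks geometrically with its length.
p = 0.99  # assumed per-step reliability; the number is illustrative, not measured
for steps in (10, 50, 100, 500):
    print(steps, round(p ** steps, 3))
# 10 -> 0.904, 50 -> 0.605, 100 -> 0.366, 500 -> 0.007
```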
I literally just had an AI prove the foregoing with some rigor, and in the very next prompt, I asked it to check my logical reasoning for consistency and it claimed it was able to do so (->|<-).
They are search engines that can remix results.
I like this one because I think most modern folks have a usefully accurate model of what a search engine is in their heads, and also what "remixing" is, which adds up to a better metaphor than "human machine" or whatever.
A practically infinite library where both gibberish and truth exist side by side.
The trick is navigating the library correctly. Except in this case you can’t reliably navigate it. And if you happen to stumble upon some “future truth” (i.e. new knowledge), you still need to differentiate it from the gibberish.
So a “crappy” version of the Library of Babel. Very impressive, but the caveats significantly detract from it.
But the truth is there has been a major semantic shift. Previously LLMs could only solve puzzles whose answers were literally in the training data. It could answer a math puzzle it had seen before, but if you rephrased it only slightly it could no longer answer.
But now, LLMs can solve puzzles where, like, it has seen a certain strategy before. The newest IMO and ICPC problems were only "in the training data" for a very, very abstract definition of training data.
The goal posts will likely have to shift again, because the next target is training LLMs to independently perform longer chunks of economically useful work, interfacing with all the same tools that white-collar employees do. It's all LLM slop til it isn't, same as the IMO or Putnam exam.
And then we'll have people saying that "white collar employment was all in the training data anyway, if you think about it," at which point the metaphor will have become officially useless.
I would heartily embrace an "AI-to-Bag of Words" browser plugin.
The defenders are right insofar as the (very loose) anthropomorphizing language used around LLMs is justifiable to the extent that human beings also rely on disorder and stochastic processes for creativity. The critics are right insofar as equating these machines to humans is preposterous and mostly relies on significantly diminishing our notion of what "human" means.
Both sides fail to meet the reality that LLMs are their own thing, with their own peculiar behaviors and place in the world. They are not human and they are somewhat more than previous software and the way we engage with it.
However, the defenders are less defensible insofar as their take is mostly used to dissimulate in efforts to make the tech sound more impressive than it actually is. The critics at least have the interests of consumers and their full education in mind—their position is one that properly equips consumers to use these tools with an appropriate amount of caution and scrutiny. The defenders generally want to defend an overreaching use of metaphor to help drive sales.
But even more than that, today’s AI chats are far more sophisticated than just probabilistically producing the next word. Mixture-of-experts routing sends each token through different expert sub-networks. Agents are able to search the web, write and execute programs, or use other tools. This means they can actively seek out additional context to produce a better answer. They also have heuristics for deciding whether an answer is correct or whether they should use tools to try to find a better one.
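As a rough sketch of that tool-use loop (not any particular vendor's API; `llm` and `run_tool` below are hypothetical stand-ins):

```python
# Minimal shape of an agent loop: generate -> maybe call a tool -> feed the result back.
# Both helpers are stubs; a real system plugs in an LLM API and concrete tools.

def llm(messages):
    """Placeholder for a chat-completion call; returns a dict describing the next step."""
    raise NotImplementedError  # assumption: your provider of choice goes here

def run_tool(name, argument):
    """Placeholder tool dispatcher (web search, code execution, etc.)."""
    raise NotImplementedError

def agent(task, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = llm(messages)
        if step["type"] == "answer":  # the model decided it has enough context
            return step["content"]
        result = run_tool(step["tool"], step["argument"])  # the model asked for a tool
        messages.append({"role": "tool", "content": result})
    return "gave up"
```

The point is only that the outer loop lets the model gather extra context before answering, which is already more than "produce the next word once".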
The article is correct that they aren’t humans and they have a lot of behaviors that are not like humans, but oversimplifying how they work is not helpful.
"The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.
The questions at issue are these: does the machine actually understand the conversation, or is it just simulating the ability to understand the conversation? Does the machine have a mind in exactly the same sense that people do, or is it just acting as if it had a mind?"
Tokens in the form of neural impulses go in, tokens in the form of neural impulses go out.
We would like to believe that there is something profound happening inside, and we call that consciousness. Unfortunately, when reading about split-brain patient experiments or agenesis of the corpus callosum cases, I feel like we are all deceived, every moment of every day. I came to the realization that the confabulation observed there is just a more pronounced version of the normal.
> But we don’t go to baseball games, spelling bees, and Taylor Swift concerts for the speed of the balls, the accuracy of the spelling, or the pureness of the pitch. We go because we care about humans doing those things. It wouldn’t be interesting to watch a bag of words do them—unless we mistakenly start treating that bag like it’s a person.
That seems to be the marketing strategy of some very big, now AI-dependent companies: Sam Altman and others exaggerating and distorting the capabilities and future of AI. The biggest issue when it comes to AI is still the same truth as with other technology: it's important who controls it. Attributing agency and personality to AI is a dangerous red flag.
Interestingly, the experience of sleep paralysis seems to change with the culture. Previously, people experienced it as being ridden by a night hag or some other malevolent supernatural being. More recently, it might account for many supposed alien abductions.
The experience of sleep paralysis sometimes seems to have a sexual element, which might also explain the supposed 'probings'!
The best way to think about LLMs is to think of them as a Model of Language, but very Large
Even if a cockroach _could_ express its teeny tiny feelings in English, wouldn't you still step on it ?
> That’s also why I see no point in using AI to, say, write an essay, just like I see no point in bringing a forklift to the gym. Sure, it can lift the weights, but I’m not trying to suspend a barbell above the floor for the hell of it. I lift it because I want to become the kind of person who can lift it. Similarly, I write because I want to become the kind of person who can think.
At least the human tone implies fallibility; you don't want them acting like an interactive Wikipedia.
Good argument against personifying wordbags. Don't be a dumb moth.
The quantitative and qualitative difference between (a) "all words ever written" and (b) "ones that could be scraped off the internet or scanned out of a book" easily exceeds the size of any LLM
Compared to (a), (b) is a tiny pouch, not even a bag
Opinions may differ on whether (b) is a representative sample of (a)
The words "scanned out of a book" would seem to be the most useful IMHO but the AI companies do not have enough words from those sources to produce useful general purpose LLMs
They have to add words "that could be scraped off the internet" which, let's be honest, is mostly garbage
> But we don’t go to baseball games, spelling bees, and Taylor Swift concerts for the speed of the balls, the accuracy of the spelling, or the pureness of the pitch. We go because we care about humans doing those things.
My first thought was does anyone want to _watch_ me programming?
A. We don't really understand what's going on in LLMs. Mechanistic interpretability is still a nascent field, and its best results have come on dramatically smaller models. Understanding the surface-level mechanic of an LLM (an autoregressive transformer) should perhaps instill more wonder than confidence.
B. The field is changing quickly and is not limited to the literal mechanic of an LLM. Tool calls, reasoning models, parallel compute, and agentic loops add all kinds of new emergent effects. There are teams of geniuses with billion-dollar research budgets hunting for the next big trick.
C. Even if we were limited to baseline LLMs, they had very surprising properties as they scaled up, and the scaling isn't done yet. GPT-5 was based on the GPT-4 pretraining. We might start seeing (actual) next-level LLMs next year. Who actually knows how that might go? (Yes, yes, I know Orion didn't go so well. But that was far from the last word on the subject.)
And yet it did. We did get R2-D2. And if you ask R2-D2 what it's like to be him, he'll say: "like a library that can daydream" (that's what I was told just now, anyway.)
But then when we look inside, the model is simulating the science fiction it has already read to determine how to answer this kind of question. [0] It's recursive, almost like time travel. R2-D2 knows who he is because he has read about who he was in the past.
It's a really weird fork in science fiction, is all.
[0] https://www.scientificamerican.com/article/can-a-chatbot-be-...
To be fair, the average person couldn't answer this either, at least not without thorough research.
> Similarly, I write because I want to become the kind of person who can think.
If you call an LLM with "What is the meaning of life?", it will return the most relevant next token, which might be "Great".
If you call it with "What is the meaning of life? Great", you might get back "question".
... and so on until you arrive at "Great question! According to Western philosophy" ... etc etc.
The question is how the LLM determines that "relevancy" information.
The problem I see is that there are a lot of different algorithms which operate that way and only differ in how they calculate the relevancy scores. In particular, there are Markov chains that use a very simple formula. LLMs also use a formula, but it's an inscrutably complex one.
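To make that concrete, here is a sketch of the shared autoregressive loop with two interchangeable "relevancy" scorers: a toy Markov-style lookup table and a placeholder for the inscrutable neural formula. All names and numbers are illustrative.

```python
# Autoregressive generation is the same outer loop whether "relevancy" comes from
# a Markov-chain transition table or from a huge neural network; only the scorer differs.

def markov_scores(context):
    # Toy transition table: last word -> possible next words with counts (made up).
    table = {"Great": {"question": 9, "job": 1}}
    return table.get(context[-1], {})

def neural_scores(context):
    # Placeholder for an inscrutably large learned function over the whole context.
    raise NotImplementedError

def generate(prompt_words, score_fn, n_tokens=5):
    words = list(prompt_words)
    for _ in range(n_tokens):
        scores = score_fn(words)
        if not scores:
            break
        words.append(max(scores, key=scores.get))  # greedy: pick the most "relevant" token
    return " ".join(words)

print(generate(["What", "is", "the", "meaning", "of", "life?", "Great"], markov_scores))
# -> "What is the meaning of life? Great question"
```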
I feel the public discussion either treats LLMs as machine gods or as literal Markov chains, and both are misleading. The interesting question, namely how that giant formula of feedforward neural network inference can deliver those results, isn't really touched.
But I think the author's intuition is right in the sense that (a) LLMs are not living beings and they don't "exist" outside of evaluating that formula, and (b) the results are still restricted by the training data and certainly aren't any sort of "higher truths" that humans would be incapable of understanding.
https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...