And, yep! A lot of people absolutely believe it will and are acting accordingly.
It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.
Once men turned their thinking over to machines
in the hope that this would set them free.
But that only permitted other men with machines
to enslave them.
...
Thou shalt not make a machine in the
likeness of a human mind.
-- Frank Herbert, Dune
You won't read, except the output of your LLM. You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
You won't think or analyze or understand. The LLM will do that.
This is the end of your humanity. Ultimately, the end of our species.
Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.
Join us, or better yet: deploy weapons of your own design.
– 'SLOW TUESDAY NIGHT', a 2,600-word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965
https://www.baen.com/Chapters/9781618249203/9781618249203___...
Damn, good read.
> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.
You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle-management, administrative, bureaucratic-type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.? Ostensibly a network-connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.
It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.
The hyperbolic growth equation

  dx/dt = x²

has the solution

  x = 1/(C − t)

and is interesting in relation to the classic exponential growth equation

  dx/dt = x

because the rate of growth is proportional to x, and it represents the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is fast as x → C, but for x ≪ C it is glacial). It's an obscure equation because it never gets a good discussion in the literature (that I've seen, and I've looked) outside of an aside in one of Howard Odum's tomes on emergy. Like the exponential growth equation it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it qualitatively isn't that different from the logistic growth equation

  dx/dt = (1 − x) x

thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.

He also argued that computing power would continue growing exponentially and that machines would reach roughly human-level intelligence around the early to mid-21st century, often interpreted as around 2030–2040. He estimated that once computers achieved processing capacity comparable to the human brain (on the order of 10¹⁴–10¹⁵ operations per second), they could match and then quickly surpass human cognitive abilities.
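A quick check of the hyperbolic solution above, by separation of variables:

  dx/dt = x²
  dx/x² = dt
  −1/x = t − C
  x = 1/(C − t)

so x diverges at the finite time t = C, whereas the exponential solution e^t only diverges in the limit t → ∞.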
It said that the article claims it's not necessarily that AI is getting smarter, but that people might be getting too stupid to understand what they are getting into.
Can confirm.
Also: > As t → t_s⁻, the denominator goes to zero. x(t) → ∞. Not a bug. The feature.
Classic LLM lingo in the end there.
I’m going to lose it the day this becomes vernacular.
The only metric going infinite is the one that measures hype
Current LLM-style systems seem like extremely powerful interpolation/search over human knowledge, but not engines of fundamentally new ideas, and it’s unclear how that turns into superintelligence.
As we get closer to a perfect reproduction of everything we know, the graph so far continues to curve upward. Image models are able to produce incredible images, but if you ask one to produce something in an entirely new art style (new the way cubism once was), none of them can. You just get a random existing style. There have been a few original ideas - the QR code art comes to mind[1] - but the idea in those cases comes from the human side.
LLMs are getting extremely good at writing code, but the situation is similar. AI gives us a very good search over humanity's prior work on programming, tailored to any project. We benefit from this a lot considering that we were previously constantly reinventing the wheel. But the LLM of today will never spontaneously realise that there is an undiscovered, even better way to solve a problem. It always falls back on prior best practice.
Unsolved math problems have started to be solved, but as far as I'm aware, always using existing techniques. And so on.
Even as a non-genius human I could come up with a new art style, or have a few novel ideas in solving programming problems. LLMs don't seem capable of that (yet?), but we're expecting them to eventually have their own ideas beyond our capability.
Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species (or another superintelligent AI) and then it would act like them. But how do we synthesise superintelligent training data? And even then, would they be limited to what that superintelligence already knew at the time of training?
Maybe a new paradigm will emerge. Or maybe things will actually slow down in a way - will we start to rely on AI so much that most people don't learn enough for themselves that they can make new novel discoveries?
[1] https://www.reddit.com/r/StableDiffusion/comments/141hg9x/co...
I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.
* We're cutting because of expected low revenue. (Negative)
* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)
* We're cutting because of AI. (Double-plus positive)
The hype is real. Will we see drastically reduced operational costs the coming years or will it follow the same curve as we've seen in productivity since 1750?
---
I wouldn't say it's that much different. This has always been a key point of the singularity:
> Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.
It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.
^ That's your problem right there.
Assuming a hyperbolic model would definitely result in some exuberant predictions but that's no reason to think it's correct.
The blog post contains no justification for that model (besides well it's a "function that hits infinity"). I can model the growth of my bank account the same way but that doesn't make it so. Unfortunately.
I feel like I need to start more sprint stand-ups with this quote...
> The labor market isn't adjusting. It's snapping.
> MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal.
In other words, there may be a geopolitical crisis in the works, similar to how the Dot Bomb, Bush v. Gore, 9/11, etc popped the Internet Bubble and shifted investment funds towards endless war, McMansions and SUVs to appease the illuminati. Someone might sabotage the birth of AGI like the religious zealot in Contact. Global climate change might drain public and private coffers as coastal areas become uninhabitable, coinciding with the death of the last coral reefs and collapse of fisheries, leading to a mass exodus and WWIII. We just don't know.
My feeling is that the future plays out differently than any prediction, so something will happen that negates the concept of the Singularity. Maybe we'll merge with AGI and time will no longer exist (oops that's the definition). Maybe we'll meet aliens (same thing). Or maybe the k-shaped economy will lead to most people surviving as rebels while empire metastasizes, so we take droids for granted but live a subsistence feudal lifestyle. That anticlimactic conclusion is probably the safest bet, given what we know of history and trying to extrapolate from this point along the journey.
- Arthur Dent, H2G2
I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.
The (social) Singularity is already happening in the form of a mass delusion that - especially in the Abrahamic apocalyptic cultures - creates a fertile breeding ground for all sorts of insanity.
Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice; what would be the total cost of doing that? If your answer is less than $10 - bingo!).
We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.
Sure is a lot of words though :)
The Singularity is illogical, impractical, and impossible. It simply will not happen, as defined above.
1) It's illogical because it's a different kind of intelligence, used in a different way. It's not going to "surpass" ours in a real sense. It's like saying Cats will "surpass" Dogs. At what? They both live very different lives, and are good at different things.
2) "self-improving and uncontrollable technological growth" is impossible, because 2.1.) resources are finite (we can't even produce enough RAM and GPUs when we desperately want it), 2.2.) just because something can be made better, doesn't mean it does get made better, 2.3.) human beings are irrational creatures that control their own environment and will shut down things they don't like (electric cars, solar/wind farms, international trade, unlimited big-gulp sodas, etc) despite any rational, moral, or economic arguments otherwise.
3) Even if 1) and 2) were somehow false, living entities that self-perpetuate (there isn't any other kind, afaik) do not have some innate need to merge with or destroy other entities. It comes down to conflicts over environmental resources and adaptations. As long as the entity has the ability to reproduce within the limits of its environment, it will reach homeostasis, or go extinct. The threats we imagine are a reflection of our own actions and fears, which don't apply to the AI, because the AI isn't burdened with our flaws. We're assuming it would think or act like us because we have terrible perspective. Viruses, bacteria, ants, etc don't act like us, and we don't act like them.
Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.
Crazy times we live in.
The answer to the meaning of life is 42, by the way :)
*edit* - seems inline with what the author is saying :)
> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.
Don't worry about the future
Or worry, but know that worrying
Is as effective as trying to solve an algebra equation by chewing Bubble gum
The real troubles in your life
Are apt to be things that never crossed your worried mind
The kind that blindsides you at 4 p.m. on some idle Tuesday
- Everybody's free (to wear sunscreen)
Baz Luhrmann
(or maybe Mary Schmich)

4 years early for the Y2K38 bug.
Is it coincidence or Roko's Basilisk who has intervened to start the curve early?
Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.
So, "Falling of the night" ?
If one is looking for a quote that describes today's tech industry perfectly, that would be it.
Also using the MMLU as a metric in 2026 is truly unhinged.
No one has figured out a way to run a society where able-bodied adults don't have to work, whether capitalist, socialist, or any variation. I look around and there seems to still be plenty of work to do that we either cannot or should not automate: in education, healthcare, arts (should not) or trades and R&D for the remaining unsolved problems (cannot yet). Many people seem to want to live as though we already live in a post-scarcity world when we don't yet.
Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x
> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.
Huh? I don't get it. e^t would also still be finite at heat death.
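The contrast being drawn:

  exponential: x(t) = e^t          finite at every finite t; diverges only in the limit t → ∞
  hyperbolic:  x(t) = 1/(t_s − t)  diverges at the finite time t_s, an actual pole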
Arrested Development?
Because while machine learning is not actually "AI", an exponential increase in tokens per dollar would indeed change the world like smartphones once did.
> the top post on hn right now: The Singularity will occur on a Tuesday
oh
I really don't care much if this is semi-satire, as someone else pointed out; the idea that AI will ever get "sentient" or explode into a singularity has to die out, pretty please. Just make some nice Titanfall-style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense, please.
Those short sentences are the most obvious clue. It’s too well written to be human.
Doomsday: Friday, 13 November, A.D. 2026
There is an excellent blog post about it by Scott Alexander: "1960: The Year The Singularity Was Cancelled" https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...
Scaling LLMs will not lead to AGI.
The accelerating mania is bubble behavior. It'd be really interesting to have run this kind of model in, say, 1996, a few years before dot-com, and see if it would have predicted the dot-com collapse.
What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps moreso as a result of anticipation of and fears about AI.
I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures they have a tendency to lose their shit and freak the fuck out and elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.
The singularity is not something that’s going to be disputable
it’s going to be like a meteor slamming into society and nobody’s gonna have any concept of what to do - even though we’ve had literal decades and centuries of possible preparation
I’ve completely abandoned the idea that there is a world where humans and ASI exist peacefully
Everybody needs to be preparing for the world where it's:
human plus machine
versus
human groups by themselves
across all possible categories of competition and collaboration
Nobody is going to do anything about it and if you are one of the people complaining about vibecoding you’re already out of the race
Oh, and by the way, it's not gonna be with LLMs; it's coming to you from RL + robotics.