If we're nearing the top of a sigmoid curve and have at least 10-ish years to adapt, we probably can. Advances in applying AI will continue, but we'll also develop a clearer understanding of what current AI can't do.
If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. I would remind people that in its original, and generally better, formulation the singularity is simply the observation that there comes a point past which you can't predict anything at all. ("Rapture of the Nerds" is one particular possible instance of that unpredictable future; it is not the concept of the "singularity" itself.) Who knows what will happen.
[1](https://tailstrike.com/database/01-june-2009-air-france-447/)
> If enough people lose their jobs we may be able to mobilize sufficient public enthusiasm for however many trillions of dollars of new tax revenue are required. On the other hand, US income inequality has been generally increasing for 40 years, the top earner pre-tax income shares are nearing their highs from the early 20th century, and Republican opposition to progressive tax policy remains strong.
I think we are in general a highly naive, gullible class of people: we were conditioned, programmed, and placed in environments where being so was the norm and was rewarded. The leaders and resource extractors, whom we gullibly allow to trample over our dignity and our rights, take advantage of this and reinforce it through lobbying and through their influence over mainstream culture and media campaigns. Further, when social media becomes a threat to their status, they have been shown to wield their influence there too, through censorship and more. We may therefore be best served by learning not to be gullible and growing some balls.
My one ask: people seem to put "CEOs" on a pedestal any time these things come up, like they're an alien life form and oh no, they're going to do something terrible. There are good company executives and shitty ones. You should try to start a company and see if you can be one of the better ones.
I always struggled with coding before 2023, but I made ends meet, put food on the table, could work sane hours, and knew what I needed to do. Logically I should have been happy that I no longer had to grind on code, and some days I truly am, but that it would yield such poor quality of life at such a high cost was not what I expected...
I would encourage folks to look at the following industries: nuclear safety, commercial aviation, remote surgery. These industries have dealt with the issues of automation for much longer than we have as programmers.
From the research I've done, these industries went through the same journey in the 20th century that we're going through now: once something becomes automated enough, the old way simply won't work. You have to evolve new frameworks and procedures to deal with it.
So in the case of aviation they developed CRM and SRM (Crew Resource Management and Single-pilot Resource Management): how to manage the airplane as a crew, and how to manage it as a solo operator. Remember that modern airplanes are highly automated!! The human pilot is typically not hands-on for most of the flight.
In the case of surgeons, they found that de-skilling can occur in as little as four weeks without regular practice! To combat that, some surgeons are now required to practice in simulated environments to keep their skills sharp.
My feeling is that 'aphyr is right in the short-to-medium term. Current market forces and the US regulatory posture (or lack thereof) make for fewer rules and less enforcement. IMHO the results are depressingly predictable, but the train has left the station with enough momentum that there's no stopping it. If we survive long enough to make it past the medium term, things will change.
This sort of prompting is only necessary now because LLMs are janky and new. I might have written this in 2025, but now LLMs are capable of saying "wait, that approach clearly isn't working, let's try something else," running the code again, and revising their results.
There's still a little jankiness, but I'm confident LLMs will just keep getting better at metacognitive tasks.
UPDATE: At this very moment, I'm using a coding agent at work and reading its output. It's saying things like:
> Ah! The command in README.md has specific flags! I ran: <internal command>. Without these flags! I missed that. I should have checked README.md again or remembered it better. The user just viewed it, maybe to remind me or themselves. But let's first see what the background task reported. Maybe it failed because I missed the flags, or passed because the user got access and defaults worked.
AI is already developing better metacognition.
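For what it's worth, the loop these agents run is conceptually simple. Here's a minimal sketch, with `call_llm` as a hypothetical stand-in for whatever model API you're actually using:

```python
import subprocess

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: swap in whatever model API you use."""
    return 'print("ok")'  # placeholder so the sketch runs end to end

# Sketch of the metacognitive loop: run the generated code, and if it
# fails, feed the error back to the model so it can revise its attempt.
code = call_llm("Write a Python script that does the task.")
for attempt in range(3):
    result = subprocess.run(["python3", "-c", code],
                            capture_output=True, text=True)
    if result.returncode == 0:
        break  # the approach worked; stop revising
    code = call_llm(
        "That approach isn't working. It failed with:\n"
        f"{result.stderr}\nRevise this code:\n{code}"
    )
```

The "wait, that isn't working" behavior falls out of the feedback step; the newer models just need less hand-holding to use it.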
I recently discovered an example of this phenomenon in a completely unrelated area: navigation. About a week ago, I realized that I couldn't remember the exact turns to reach a place I'd started driving to recently, even after driving there 3-4 times over the course of a month. Each time, I had used Google Maps. When I used to drive pre-Google-Maps, I would typically develop a good spatial model of a route by my third drive. That skill seems to have atrophied. Even when I explicitly decide to drive without Google Maps and make mental notes of the turns, my retention of new routes is much weaker than it used to be. Thankfully, the routes I learned before becoming Google-Maps-dependent are still there.
For example, I'm now relying on Soteria, the Greek goddess of safety, salvation, and preservation from harm, to act as my database administrator.
But, thanks to all the companies working on open-weight models, I'm starting to think this might no longer happen. Currently, open-weight models are said to be just months behind the top players (and I think we should really try to do what we can to keep it that way).
I'm wondering what the predictions would be in the case where AI becomes very powerful, but models are also generally available.
Two possibilities come to mind. In the first, all the money no longer spent on employment goes toward hardware. New hardware manufacturers or innovators could jump in and create a bit more employment, but eventually it would all converge on the one finite resource in the chain: the materials and minerals needed for the hardware. Those materials might become the new "petrol". It's possible that we would eventually build enough chips to power all the AI we need without further extraction, but I wouldn't underestimate our ability to waste resources when they feel abundant.
In the second possibility, alongside a very powerful open-weight LLM, there could be big performance advancements that make hardware no longer the bottleneck. But I'm struggling to imagine this scenario. Maybe we would all be better off? Or maybe we would all just be depressed, because most people would no longer feel "useful" to society or their peers.
Yes, AF447 crashed due to lack of training for a specific situation. And yet, air travel is safer than ever.
Yes, that Tesla drove into a wall, and yet robotaxis exist, work well, and are significantly safer than human drivers.
Yes, there are a lot of "witchcraft" approaches to working with AI, but there are also significant accelerations coming out of the field that have nothing to do with witchcraft.
Yes, AI occasionally makes very stupid mistakes, but of the sort any competent engineer would have guardrails in place against.
And so a lot of the piece spends its time arguing against straw men propped up by anecdotes. That detracts from the deeply necessary discussion kicked off in the second part, on labor shock, capital concentration, and the fever dreams of AI.
The problem with AI isn't that it's useless and will disrupt the world. It's that it's already extremely useful, and that is what will lead to it disrupting the world.
It feels like "Hexing the Technical Interview" come to real life ;)
Only if you let it. You can own the means of production: I self-host my daily-driver LLMs on hardware in my garage.
I've never given money to an LLM provider and never will. I only work with tools I own.
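If anyone wants to try the same, here's a minimal sketch of what self-hosting can look like, using llama-cpp-python against a GGUF model file you've downloaded yourself (the model path below is hypothetical):

```python
# Minimal self-hosting sketch (pip install llama-cpp-python).
# The model path is a placeholder: point it at whatever GGUF
# file you've downloaded onto your own hardware.
from llama_cpp import Llama

llm = Llama(model_path="./models/my-model.gguf")
out = llm("Q: Who owns the means of production here? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Everything runs locally; no tokens, no subscription, no provider in the loop.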
I wonder if he self-censored here regarding potential futures, since the concentration of stupendous wealth in generational hands and obscene wealth disparity, coupled with machines that can do what "bodies" can, naturally point to depopulation as a goal for the elite and their (future) spawn who are not on the chopping block.
It’s only fair that they would receive the same amount. But then how can the former category continue to fulfill their obligations?
Humans are also distinctly bad at noticing certain kinds of bugs in software: think off-by-one errors, deadlocks, or any sort of bug you've stared at for days without spotting the one missing or extra semicolon. But LLMs can generate a tsunami of subtly wrong code in the time it takes a reviewer to notice one typo and miss all the rest.
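For concreteness, here's a toy example of the flavor of bug I mean (my own, not from the article). It looks completely plausible in review and silently drops the last element:

```python
# Looks right at a glance, but the off-by-one silently drops
# the last element: range(len(xs) - 1) stops one short.
def total(xs):
    s = 0
    for i in range(len(xs) - 1):  # bug: should be range(len(xs))
        s += xs[i]
    return s

print(total([1, 2, 3]))  # prints 3, not 6
```

One of these in a diff is findable. Fifty of them across a 2,000-line generated patch is a different problem.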
We have to remember that the result of our prompting is a synthesis, formed from the mass psychosis of a humanity which is simultaneously capable of being completely and utterly heinous to each other, and gloriously noble and kind as well - with nought but a stray new word and a thousand old ones forgotten to keep us all together or not.
In any case, all culture is a lie, which only persists in the re-telling. The past is a lie, too, somehow, someday, forgotten the day nobody remembers it. Hope you make some tunes into the winds and they echo on forever. And by you, I mean, not an AI/ML-based entity, but rather, the source of all lies, the human soul itself.
> You would fire these people, right?
Okay, now imagine a different colleague. One who writes a solid first draft of any boilerplate task in seconds, freeing you to focus on architecture instead of plumbing. A dev who never gets defensive when you rewrite their code, never pushes back out of ego, and never says "that's not my job." A pair programmer who's available at 3 AM on a Sunday when prod is down and you need to think out loud. One who remembers every API you've forgotten, every flag in every CLI tool, every syntax quirk in a language you use twice a year, or even every day.
You'd want that person on your team, right? In fact, you would probably give them a promotion.
Here's the thing: the original argument describes real failure modes, but then commits a subtle sleight of hand. It personifies the tool as a colleague with agency, then condemns it for lacking the judgment that agency implies. But you don't fire a table saw because it doesn't know when to stop cutting, right? You learn where to put your hands.
Every flaw in that list is, at the end of the day, a flaw in the workflow, not the tool. Code with security hazards? That's what reviews are for. And AI-generated code gets reviewed at far higher rates than the human code people have been quietly rubber-stamping for decades. Commits failing tests? Then your CI pipeline should be the gate, not a promise. Deleted your home directory? Then it shouldn't have had the permissions to do that in the first place. In fact, the whole "deleted my home directory" shit is the same thing as "our intern deleted the prod database". We all know that the response to the latter is "why did they have permission to prod in the first place??" AI is the same way, but for some god damn reason people apply totally different standards to it.
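To make the "intern" standard concrete, here's one minimal sketch of the permissions fix, with `agent_code` as a stand-in for whatever the model produced: run it in a throwaway directory with a stripped-down environment, so a stray `rm -rf ~` can only hit the sandbox.

```python
import pathlib
import subprocess
import tempfile

# agent_code is a stand-in for whatever the model generated.
agent_code = 'print("hello from the sandbox")'

with tempfile.TemporaryDirectory() as sandbox:
    script = pathlib.Path(sandbox) / "generated.py"
    script.write_text(agent_code)
    # Minimal env: HOME points inside the sandbox, so even a
    # "delete my home directory" mistake can't reach the real one.
    env = {"PATH": "/usr/bin:/bin", "HOME": sandbox}
    result = subprocess.run(
        ["python3", str(script)],
        cwd=sandbox, env=env, timeout=60,
        capture_output=True, text=True,
    )
    print(result.returncode, result.stdout)
```

This isn't a real security boundary, obviously; containers or VMs are the serious version. But the principle is the same one we apply to interns: scope the blast radius before you run the thing.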
And before anyone brings pitchforks out: this is what they wrote in a previous article:
> “Cool it already with the semicolons, Kyle.” No. I cut my teeth on Samuel Johnson and you can pry the chandelierious intricacy of nested lists from my phthisic, mouldering hands. I have a professional editor, and she is not here right now, and I am taking this opportunity to revel in unhinged grammatical squalor.
My life was made poorer for knowing that semicolons are apparently a sin, but richer for the rebellion.
There it is, an actual em-dash in the wild, written by hand.
Read up on Cluster B personality disorders (borderline, narcissistic, sociopathic/psychopathic) and you'll see the similarities: love bombing, gaslighting, a shared fantasy, etc. It's very interesting and scary at the same time.
Welcome to web development, buddy.
> how ML might change the labor market
Human labor is expensive. If LLMs really do make things cheaper and faster to produce, you don't need as many humans anymore. Again, assuming the improvement is real, there absolutely will be headcount shrinkage at existing businesses. What remains to be seen is how much cheaper machines make the work. 1.5x? 2x? 10x? 100x?
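Back-of-the-envelope, with made-up numbers and assuming output demand stays fixed (which it often doesn't), headcount scales inversely with the multiplier:

```python
# Toy arithmetic, illustrative numbers only: headcount needed to
# hold output constant at various productivity multipliers.
team = 100
for multiplier in (1.5, 2, 10, 100):
    print(f"{multiplier:>5}x -> {team / multiplier:.0f} people")
```

The gap between 1.5x and 100x is the gap between a rough decade and an economic rupture.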
> unlike sewing machines or combine harvesters, ML systems seem primed to displace labor across a broad swath of industries [...] The question is what happens when [..] all lose their jobs in the span of a decade
It's more like hand tools -> power tools, a pattern that has repeated across many trades. Everyone will adopt them, and you'll need fewer workers, who'll work faster with less skill. You get gradual labor-force shrinkage but also an increase in efficiency, so it's not as if a hole opens up in your economy. A strong economy can create new jobs from either private or public sources.
> ML allows companies to shift spending away from people and into service contracts with companies like Microsoft
The price of hardware, as it always has, trends downward, while the efficiency of open weights keeps going up (it will plateau eventually, but it's still climbing). We already spend $20,000 on servers, whether buying them once on-prem or renting them from AWS. ML is just another piece of software running on another piece of hardware.
> if companies are successful in replacing large numbers of people with ML systems, the effect will be to consolidate both money and power in the hands of capital
That ship left port like 30 years ago, dude. Laborers have no power in the 21st century.