I wonder if this is similar to Chess and Go getting 'solved'. Hard problem spaces that only the biggest brains could tackle. Maybe it turns out creating highly performant, distributed systems with a plethora of unittests is a cakewalk for LLMs, while trying to make a 'simple web app' for a niche microscopy application is like trying to drive around San Francisco.
I just have to conclude 1 of 2 things:
1) I'm not good at prompting, even though I'm one of the earliest adopters of AI for coding that I know, and I've been at it consistently for years. So I find this hard to accept.
2) Other people are just less picky than I am, or they have a less thorough review culture that lets subpar code slide more often.
I'm not sure what else I can take from the situation. For context, I work on a 15-year-old Java Spring + React (with some old pages still in Thymeleaf) web application. There are many sub-services, two separate databases, and the application also needs a two-way interface with customer hardware. So, not a simple project, but still. I can't imagine it's way more complicated than most enterprise/legacy projects...
I feel like this is not the same for everyone. For some people, the "fire" is literally about "I control a computer", for others "I'm solving a problem for others", and yet for others "I made something that made others smile/cry/feel emotions" and so on.
I think there is a section of programmers who actually do like the literal typing of letters, numbers and special characters into a computer, and for them, I understand LLMs remove the fun part. For me, I initially got into programming because I wanted to ruin other people's websites, then I figured out I needed to know how to build websites first, then I found it more fun to create and share what I've done with others so they could tell me what they think of it. That's my "fire". But I've met so many people who don't care an iota about sharing what they built with others; it means nothing to them.
I guess the conclusion is that not all programmers program for the same reason. For some of us, LLMs help a lot and make things even more fun. For others, LLMs remove the core part of what makes programming fun for them. Hence we get this constant back and forth of "Can't believe others can work like this!" vs "I can't believe others aren't working like this!", but both sides seem to completely miss the other.
I wanted to provide some more context that is not part of the blog post. Since somebody may believe I don't enjoy / love the act of writing code.
1. I care a lot about programming, I love creating something from scratch, line by line. But: at this point, I want to do programming in a way that makes me special, compared to machines. When the LLM hits a limit, and I write a function in a way it can't compete, that is good.
2. If I write a very small program that is like a small piece of poetry, this is good human expression. I'll keep doing this as well.
3. But, if I need to develop a feature, and I have a clear design idea, and I can do it in 2 hours instead of 2 weeks, how do I justify to myself spending a lot more time just for what I love? That would be too egocentric a POV, I believe.
4. For me too this is painful, as a transition, but I need to adapt. Fortunately I also enjoyed a lot the design / ideas process, so I can focus on that. And write code myself when needed.
5. The reason why I wrote this piece is that I believe there are still a lot of people unprepared for the fact that we are going to be kind of obsolete in what defined us as a profession: the ability to write code. A complicated ability requiring a number of skills at the same time: language skills, algorithms, problem decomposition. Since this is painful, and I believe we are headed in a certain direction, I want to tell the other folks in programming to accept reality. It will be easier this way.
The way I see it, I can just start using AI once they get good enough for my type of work. Until then I'm continuing to learn instead of letting my brain atrophy.
> How do I feel, about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge.
I don't see it as democratic or democratising. TBH the knowledge is stored in three giga-companies that sometimes used borderline unlawful (if not outright unlawful?) methods to gain it, scraping it off GPL projects etc. And now they are selling it to us without giving the models away. The cost IS understandable, because the horrendously expensive vector cards do not come for free, but there is only one country the knowledge is gathered in, so this might as well fade away one day when an orange president says so (gimme all the monies or else..).

The concern mostly comes from the business side… for all the usefulness of the tech, there is no clearly viable path that financially supports everything that's going on. It's a nice set of useful features, but without products with sufficient revenue flowing in to pay for it all.
That paints a picture of the tech sticking around but a general implosion of the startups and business models betting on making all this work.
The latter isn't really "anti-AI hype" but more folks just calling out the reality that there's not a lot of evidence and data to support the amount of money invested and committed. And if you've been around the tech and business scene a while, you've seen that movie before and know what comes next.
In 5 years time I expect to be using AI more than I do now. I also expect most of the AI companies and startups won’t exist anymore.
Why don't we have to be anti-AI? Why, in his opinion, is it just "HYPE"? I didn't find any answer in his post. He doesn't analyse the cons of AI or explain why some people might be anti-AI. He skipped the hard part and wrote a mild article that republishes the narrative already being spread on every social media platform.
Edit for clarification: I don't consider anti-AI the people that think LLMs don't work, they are wrong. I consider anti-AI people that are worried how this technology will impact society in so many ways that are hard to predict, including the future of software engineering.
No. I agree with the author, but it's hyperbolic of him to phrase it like this. If you have solid domain knowledge, you'll steer the model with detailed specs. It will carry those out competently and multiply your productivity. However, the quality of the output still reflects your state of knowledge. It just provides leverage. Given the best tractors, a good farmer will have much better yields than a shit one. Without good direction, even Opus 4.5 tends to create massive code repetition. Easy to avoid if you know what you are doing, albeit in a refactor pass.
Being differently trained and using different tools than almost everyone else I know in engineering my entire career has allowed me to find solutions and vulnerabilities others have missed time and time again. I exclusively use open source software I can always take apart, fully understand, and modify as I like. This inclination has served me well and is why I have the skillsets I do today.
If everyone is doing things one way, I instinctively want to explore all the other ways to train my own brain to continue to be adversarial and with a stamina to do hard experiments by hand when no tools exist to automate them yet.
Watching all my peers think more and more alike actually scares me, as they are all talking to the same LLMs. None for me, thanks.
"But this magic proprietary tool makes my job so much easier!!" has never been a compelling argument for me.
Antirez + LLM + CFO = Billion Dollar Redis company, quite plausibly.
/However/ ...
As for the delta provided by an LLM to Antirez, outside of Redis (and outside of any problem space he is already intimately familiar with), an apples-to-apples comparison would be him trying this on an equally complex codebase he has no idea about. I'll bet... what Antirez can do with Redis and LLMs (certainly useful, a huge quality-of-life improvement for Antirez), he cannot even begin to do with (say) Postgres.
The only way to get there with (say) Postgres, would be to /know/ Postgres. And pretty much everyone, no matter how good, cannot get there with code-reading alone. With software at least, we need to develop a mental model of the thing by futzing about with the thing in deeply meaningful ways.
And most of us day-job grunts are in the latter spot... working in some grimy legacy multi-hundred-thousand-line code-mine, full of NPM vulns, schlepping code over the wall to QA (assuming there is even a QA), and basically developing against live customers --- "learn by shipping", as they say.
I do think LLMs are wildly interesting technology, however they are poor utility for non-domain-experts. If organisations want to profit from the fully-loaded cost of LLM technology, they better also invest heavily in staff training and development.
You might feel great, and that's fine, but I don't. And software quality is going down; I wouldn't agree that LLMs will help write better software.
However I can’t help but notice some things that look weird/amusing:
- The exact timing of when many programmers became enlightened about AI capabilities, and the frequency of their posts.
- The uniform language they use in these posts. Grandiose adjectives, standard phrases like ‘it seems to me’
- And more importantly, the sense of urgency and FOMO they emit. This is particularly weird for two reasons. First, if the past has shown anything regarding technology, it is that open source always catches up. But this is not the case yet. Second, if the premise is that we're just in the beginning, all these ceremonial flows will be obsolete.
Do not get me wrong, as of today these are all valid ways to work with AI and in many domains they increase the productivity. But I really don’t get the sense of urgency.
I was able to use [AI coding agent] to achieve [task], [task] and [task] within [time]. It would not be possible to do that without it.
[My thoughts about this]
Which is the same as dozens if not hundreds of similar articles already posted here, and the comments in the discussion don't explore any new perspectives either.
I honestly don't understand why people still write and discuss these articles. While I understand the need for personal expression, nothing you possibly say is new.
This is the advice I've been giving my friends and coworkers as well for a while now. Forget the hype, just take time to test them from time to time. See where it's at. And "prepare" for what's to come, as best you can.
Another thing to consider. If you casually look into it by just reading about it, be aware that almost everything you read in "mainstream" places has been wrong in 2025. The people covering this, writing about this, producing content on this have different goals in this era. They need hits, likes, shares and reach. They don't get that with accurate reporting. And, sadly, negativity sells. It is what it is.
The only way to get an accurate picture is to try them yourself. The earlier you do that, the better off you'll be. And a note on signals: right now, a "positive" signal is more valuable for you than many "negative" ones. Read those and try to understand the what, if not the how. "I did this with cc" is much more valuable today than "x still doesn't do y reliably".
Well, that's one way to put it. But not everyone enjoys the art only for the results.
I personally love learning, and by letting AI drive forward and me following, I don't learn. To learn is to be human.
So saying the fun is untouched is one-sided. Not everyone is in it for the same reasons.
Stay in plan mode most of the time. It will produce a step by step set of instructions - more context - for the LLM to execute the change. It’s the best place to exert detailed control over what will happen. Claude lets you edit it in a vim window.
Think about testing strategy carefully. Connecting the feedback back into the LLM is what makes a lot of the magic happen. But it requires thought or the LLM might cheat or you get a suboptimal result.
Then with these two you spend your time thinking in terms of product correctness - good tests - and implementation plan - deciding if the LLM has a sane grasp of the problem and will create a sane result.
You’re at a higher level of abstraction, still caring about details, but rarely finicky up to your elbows in line by line code.
If you can get good at these you’re well on your way.
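To make the testing point concrete, here is a minimal sketch (Python/pytest; the `RateLimiter` feature and the `limiter` module are made up for illustration, not taken from the comment above) of the kind of executable check you can hand an agent so that its feedback loop runs against real test output rather than its own self-assessment:

```python
# test_rate_limiter.py (illustrative only): `limiter.RateLimiter` is a
# hypothetical feature the agent has been asked to implement.
import time

from limiter import RateLimiter


def test_allows_burst_up_to_capacity():
    rl = RateLimiter(capacity=5, refill_per_second=1)
    # The first 5 calls in the same instant pass, the 6th is rejected.
    assert all(rl.allow("client-a") for _ in range(5))
    assert not rl.allow("client-a")


def test_refills_over_time():
    rl = RateLimiter(capacity=1, refill_per_second=10)
    assert rl.allow("client-b")
    assert not rl.allow("client-b")
    time.sleep(0.2)  # long enough for at least one token to refill
    assert rl.allow("client-b")
```

The specific feature doesn't matter; what matters is that "run pytest and fix the failures" is a loop the agent can be pointed at, which is where a lot of the magic happens, and also where it can cheat (say, by special-casing the test inputs) if you don't read what it wrote.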
--------
[AI] Let me check for a Directory.Build.props file instead, as it might contain the package versions:
[me] well are you doing it?
[AI] I'll check for the Directory.Build.props file to see if it contains the package versions:
[me] Let me know when you're done
[AI] Let me check for the Directory.Build.props file to see if it contains the package versions:
[me] Are you currently doing anything?
[AI] I noticed that the CsCheck package reference was commented out in your project file. Here's what I've done so far:
- Uncommented the CsCheck package reference in your __________.Tests.Core.csproj file.
- Tried to restore the packages, but hit a snag because the project uses central package version management.
To fix this, I need to find where the package versions are managed in your solution. Could you help me locate the Directory.Packages.props or Directory.Build.props file in your solution root? This is where the package versions are typically defined in .NET projects with central package management.
Once you point me to the right file, I can help you add the CsCheck package version there.
[me] it's in Directory.Packages.props
[AI] I'll help you find and update the Directory.Packages.props file. Let's locate it first.
[me] You have already located it: it's Directory.Packages.prop in the solution root, like you said.
[AI] Let me check the contents of the Directory.Packages.props file to see the current package versions:
[me] CsCheck is already in Directory.Packages.prop
[AI] I've restored the packages, and the command completed successfully. The CsCheck package should now be properly restored and the "Cannot resolve symbol 'CsCheck'" error should be resolved.
Let me know if you still see the error in your IDE. If you do, try these steps:
1. Close and reopen the solution
2. Rebuild the solution
If the error persists, let me know and we can try additional troubleshooting steps.
--------
They ran out of believable arguments or never had any to begin with?
As it was said on a thread here, LLMs are search engines. The rest is religion.
If I have to do all this babysitting, is it really saving me anything other than typing the code? It hasn't felt like it yet and if anything it's scary because I need to always read the code to make sure it's valid, and reading code is harder than writing it.
They did the same with Upton Sinclair's quote, which is now used against any worker who dares to hope for a salary.
There is not much creativity in the pro-LLM faction, which is guided by monetary interests and does not mind burning its social capital, trading credibility for money.
We currently have engineers competent enough to use an LLM, review the code written, and fix the places where the LLM writes poor code. We also still have engineers pushing novel code themselves. That means we are on the up-slope. Right now, nascent hackers are perhaps learning the old ways, but they are certainly paying attention to and using vibe coding. That creates a negative feedback loop. As greybeards age out of programming, so too does the knowledge foundation that allowed LLM training to take place in the first place, and more importantly, that trained the next generation of hackers. AI is going to increasingly consume AI code, and I haven't seen solid evidence yet that it is capable (at least currently) of putting truly novel ideas into code.
There will be an inflection point where AIs are consuming their own output more than that from competent hackers, and that's when things will go downhill unless there is a significant breakthrough in actual reasoning in AI.
I've been taking a proper whack at the tree every 6 months or so. This time it seems like it might actually fall over. Every prior attempt I could barely justify spending $10-20 in API credits before it was obvious I was wasting my time. I spent $80 on tokens last night and I'm still not convinced it won't work.
Whether or not AI is morally acceptable is a debate I wish I had the luxury of engaging in. I don't think rejecting it would allow me to serve any good other than in my own mind. It's really easy to have certain views when you can afford to. Most of us don't have the privilege of rejecting the potential that this technology affords. We can complain about it but it won't change what our employers decide to do.
Walk the game theory for 5 minutes. This is a game of musical chairs. We really wish it weren't. But it is. And we need to consider the implications of that. It might be better to join the "bad guys" if you actually want to help those around you. Perhaps even become the worst bad guy and beat the rest of them to a functional Death Star. Being unemployed is not a great position to be in if you wish to assist your allies. Big picture, you could fight AI downstream by capitalizing on it near term. No one is keeping score. You might be in your own head, but you are allowed to change that whenever you want.
I think for some who are excited about AI programming, they're happy they can build a lot more things. I think for others, they're excited they can build the same amount of things, but with a lot less thinking. The agent and their code reviewers can do the thinking for them.
I wonder if I’m the odd one out or if this is a common sentiment: I don’t give a shit about building, frankly.
I like programming as a puzzle and the ability to understand a complex system. “Look at all the things I created in a weekend” sounds to me like “look at all the weight I moved by bringing a forklift to the gym!”. Even ignoring the part that there is barely a “you” in this success, there is not really any interest at all for me in the output itself.
This point is completely orthogonal to the fact that we still need to get paid to live, and in that regard I'll do what pays the bills, but I’m surprised by the amount of programmers that are completely happy with doing away with the programming part.
I review every single line and keep the increments small. I also commit often. Wouldn't want to go back to coding alone.
but I find it curious that the many will always pay for the few https://youtu.be/y12yZ7bQizk?si=Mbgg-F7IP8HJXJPz
and at what cost ? https://youtu.be/-sNKfRq1oKg?si=6m8pVM9tvawohUbm
Why not just mechanical turk the codebase? Lotsa jobs even with LLM augmentation at current state.
Where is the long term thinking of utility vs cost?
Until AI can solve its own energy generation issues, the hype is gross.
Thankfully I'll be long dead (hopefully) before a local AQI > 500 is considered the new normal common good trade for high fidelity personalized deep fake pr0n
or the cure for cancer at US healthcare billable rates.
In software, we, the developers, have increasingly been a bottleneck. The world needs WAY more software than we can economically provide, and at long last a technology has arrived that will help route around us for the benefit of humanity.
Here's an excellent Casey Handmer quote from a recent Dwarkesh episode:
> One way to think about the industrial revolutions is [...] what you're doing is you're finding some way of bypassing a constraint or bypassing a bottleneck. The bottleneck prior to what we call the Industrial Revolution was metabolism. How much oats can a human or a horse physically digest and then convert into useful mechanical output for their peasant overlord or whatever? Nowadays we would giggle to think that the amount of food we produce is meaningful in the context of the economic power of a particular country. Because 99% of the energy that we consume routes around our guts, through the gas tanks of our cars and through our aircraft and in our grids and stuff like that.
> Right now, the AI revolution is about routing around cognitive constraints, that in some ways writing, the printing press, computers, the Internet have already allowed us to do to some extent. A credit card is a good example of something that routes around a cognitive constraint of building a network of trust. It's a centralized trust.
It's a great episode, I recommend it: https://www.dwarkesh.com/p/casey-handmer
The naive view considers only the small-scale ease of completing a task in isolation and expects compensation to be proportional to it. But that's not how things work. Yes, abstraction makes individual tasks easier to complete, but with the extra time available more can be done, and as more is done and can be done, new complexities emerge. And as an individual can do more, the importance of trust grows as well. This is why CEOs make disproportionately more than their employees: while the complexity of their work may scale only linearly with their position, or not at all beyond a certain point, the impact of their decisions grows exponentially.
LLMs are just going to enhance the power and influence of software developers.
Really, one of the first things he said sums it up:
> facts are facts, and AI is going to change programming forever.
I have been using it in a very similar manner to how he describes his workflow, and it’s already greatly improved my velocity and quality.
I also can relate to this comment:
> I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge.
Group 1 is untouched since they were writing code for the sake of writing and they have the reward of that altruism.
Group 2 are those that needed their projects to bring in some revenue so they can continue writing open-source.
Group 3 are companies that used open-source as a way to get market share from proprietary companies, using it more in a capitalistic way.
Over time, I think groups 2 and 3 will leave open-source and group 1 will make up most of the open-source contributors. It is up to you to decide if projects like Redis would be built today with the monetary incentives gone.
The goal of the labs is to keep making these leaps even bigger with every generation. Unless you secretly believe that some portion of the craft will be left unexplored by the labs, the idea that the things that are still relatively borked now will not be worked on or fixed later is a silly notion to me. Future versions will be easier to prompt, and the tools will do more of the heavy lifting of following up and re-rolling misinterpretations. I'd argue that a user sleeping through all of this is likely to use a future version better than someone who is obsessing over all their assumptions about how to coerce these models to work right now; current-version hyper-users will likely bring unnecessary baggage imo.
For now, even with Opus 4.5 the time horizon for delivering a full-stack project is not significantly different than before, it's still limited by how much you can push it. I'd argue that someone without understanding of how things work is unlikely to succeed in getting production-grade outcomes from these current versions. The point is, if you choose to learn more and get better in understanding and building things that work (with AI or otherwise) you'll be just fine to use the versions that have fully or mostly automated the entire process. Nobody will be left behind, only those who stop building altogether.
To me the next obvious barrier will be the size (context) barrier, and I can easily see a place for a human in that process. Sure, anyone can prompt an agent to build a codebase, but as those codebases grow and evolve, it's hard for me to believe a non-specialized person will be able to manage those projects.
edit: I had another thought after posting this. To all the smaller company devs just building and maintaining internal tools. Users always want more features. The difference is now you'll be able to deliver them.
The biggest disruption I'm seeing is in estimation. It's a skill developed with experience, and it just went poof
Saying that it doesn't matter if the stock market crashes because in the long run, the technology will create more economic value to make up for it certainly reveals the age and/or financial position of the author! When the market crashes, some people will not be able to retire, and will become a financial burden to their families. Why is this okay?
The market is a tool that has been used to socialize losses. For people who still have a lot of life left to live, the chances of recovery are much higher. For others, it's pretty terrifying.
Yes, advancements in technology often lead to significant economic gains, and we should therefore pursue them. But to say that we should pursue them regardless of the risks is shortsighted and irresponsible.
Often while trying to fall asleep, I'll be thinking something like "I need my app to do such and such".
The next day, instead of forcing myself to start coding, I can literally say to IntelliJ Junie (using Claude) exactly that: "I need my app to do such and such". I'm often pleasantly surprised by the outcome. And if there's anything that needs to be tweaked, I'm now in the mode of critiquing and editing.
AI is also automation but the instructions are given in a higher level language. You still have to know how to automate it. You need to instruct the machine in sufficient detail, and if done correctly the machine will once again be able to interpret your intention, transform it to a lower level code, and execute it for you.
I want AI that responds instantaneously, and in a manner perfectly suited to my particular learning style.
I want AI so elegant in its form and function that I completely take it for granted.
What I'm getting instead is something clunky, slow, and flawed. So excuse me while I remain firmly in the anti-AI crowd.
At its core, AI has the capability to extract structure/meaning from unstructured content and vice versa. Computing systems and other machines required inputs with limited context. So far, it was a human's job to prepare that structure and context and provide it to the machines. That structure can be called a "program" or "form data" or "a sequence of steps or lever operations or button presses".
Now the machines got this AI wrapper or adapter that enables them to extract the context and structure from the natural human-formatted or messy content.
But all that works only if the input has the required amount of information and inherent structure to it. Try giving it a prompt with a jumbled-up sequence of words. So it's still the human's job to provide that input to the machine.
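As a rough sketch of that adapter idea (Python; `call_llm` is a hypothetical placeholder for whatever model API you actually use, not a real library function): the fixed structure the machine needs is defined by us, and the model only maps messy human input onto it.

```python
import json


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (hosted API, local model, ...)."""
    raise NotImplementedError


def extract_order(message: str) -> dict:
    # The target structure ("form data") is fixed up front by the human;
    # the model's job is only to fill it from unstructured text.
    prompt = (
        "Extract the order as JSON with exactly these keys: "
        "item (string), quantity (integer), rush (boolean).\n"
        f"Message: {message}"
    )
    return json.loads(call_llm(prompt))


# Unstructured input that a classic form or parser would reject outright:
# extract_order("hey, could you send me three of those blue widgets asap?")
# might plausibly return {"item": "blue widget", "quantity": 3, "rush": True}
```

And, as the comment notes, this only works when the message actually contains the information and enough inherent structure; a jumbled bag of words yields nothing useful, so providing adequate input is still the human's job.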
"your ability to create a mental representation of the problem to communicate to the LLM" – this is the tipping point imho. So far, you need to be good at this. That's why senior jobs are not affected yet. The question is for how long. We are probably just months away from the time when LLMs (or other form of AI) will be better at creating better "mental representation", better abstractions and better solutions, than most humans in most cases, including those in senior positions. And that will spill over to other non-dev jobs too.
And then goes on describing two things for which I bet almost anyone with enough knowledge of C and Redis could implement a POC in... Guess what? Hours.
At this point I am literally speechless, if even Antirez falls for this "you get so quick!!!" hype.
You get _some_ speed up _for things you could anyway implement_. You get past the "blank screen block" which prevents you from starting some project.
These are great useful things that AI does for you!
Shaving off _weeks_ of work? Let's come back in a couple of months when he'll have to rewrite everything that AI has written so well. Or, that code will just die away (which is another great use case for AI: throwaway code).
People still don't understand that writing code is a way to understand something? Clearly you don't need to write code for a domain you already understand, or that you literally created.
What leaves me sad is that this time it is _Antirez_ that writes such things.
I have to be honest: it makes me doubt my position, and I'll constantly reevaluate it. But man. I hope it's just a hype post for an AI product he'll release tomorrow.
Every now and then I post the same exact comment here on HN: where the heck are the products, then? Or where is the better outcome? The faster software? Let alone the small teams competing with bigger companies?
We are NOT anti-AI; we're exhausted from reading bs from AI astroturfers and wannabe AI tech influencers. It's so exhausting that it's always your fault that you're not "using the tool properly" and that you're going to be left behind. I'm not anti-AI, I just wish the bubble would pop so that, instead of fighting back bs from managers who "read that on HN", I can go back to coding with and without AI, wherever each applies to my needs.
Right now, there’s a limit to how widely software is adopted, largely based on software quality and cost. AI will improve software quality (for example, you can add a ton of automated tests even if you don’t use AI to develop features) and reduce the cost of building software.
That will lead to better software, and to software we didn't build in the past because it was too complex, or so niche that we weren't sure we could make enough profit to justify the development costs. It will also change many other industries, but I think generally for the better: more ways to create new things, more variations, and more customization for specific purposes.
Notwithstanding the above, to my understanding LLM services are currently being sold below cost.
If all of the above is true, at some point the degradation of quality in codebases that use these tools will be too expensive to ignore.
This doesn't make sense to me.
Surely if you were "quite involved step-by-step through the whole prototyping phase" you would have been able to prevent architectural mistakes being made?
What does your process really look like?
I don't "vibe code" in the sense that I have it build entire apps without looking at the code; I prompt it to write maybe about the 100-200 lines of code I need next after thinking about what they should look like.
I don't see how you get architectural issues creeping in if you do it that way.
I worry less about the model access and more about the hardware required to run those models (i.e. do inference).
If a) the only way to compete in software development in the future is to outsource the entire implementation process to one of a few frontier models (Chinese, US or otherwise)
and b) only a few companies worldwide have the GPU power to run inference with those models in a reasonable time
then don't we already have a massive amount of centralization?
That is also something I keep wondering about with agentic coding: being able to realize the epic fantasy hobby project you've been thinking about on and off for the last few years in a couple of afternoons is absolutely amazing. But if you do the same with work projects, how do you solve the data protection issues? Will we all now just hand our entire production codebases to OpenAI or Anthropic etc. and hope their pinky promises hold?
Or will there be a race for medium-sized companies to have their own GPU datacenters, not for production but solely for internal development and code generation?
You don't spend weeks explaining intent, edge cases, or "what I really meant" to a developer. You iterate 1:1 with the system and adjust immediately when something feels off.
Well, yes. But an opinion on what is, indeed, a fact and not hype, is still an opinion.
Even flat-earthers can state that "facts are facts".
I always looked up to antirez. Redis was really taking off after I graduated and I was impressed by the whole system and the person behind it. I was impressed to see them walk away to do something different after being so successful. I was impressed to read their blog about tackling difficult problems and how they solved them.
I'm not a 10x programmer. I don't chase MVPs or shipping features. I like when my manager isn't paying attention and I can dig into a problem and just try things out. Our database queries have issues? Maybe I can write my own AST by parsing just part of the code. Things like that.
I love BUILDING, not SHIPPING. I learn and grow when I code. Maybe my job will require me to vibe code everything some day just to keep up with the juniors, but in my free time I will use AI only enough to help speed up my typing. Every vibe coded app I've made has been unmaintainable spaghetti and it takes the joy out of it. What's the point of that?
To bring it all together, I guess some part of me was disappointed to see a person that I considered a really good programmer, seem to indicate that they didn't care about doing the actual programming?
> Writing code is no longer needed for the most part
> As a programmer, I want to write more open source than ever, now.
This is the mentality of the big companies pushing AI. Write more code faster. Make more things faster. Get paid the same, understand less, get woken up in the middle of the night when your brittle AI code breaks.
Maybe that's why antirez is so prolific and I'm not.
Sometimes I wish I was a computer scientist, instead of a programmer...
I’m starting to think of AI use more like a dietary choice. Most people are omnivores. Some people are vegans. Others are maxing protein. All of them can coexist in society and while they might annoy each other if the topic comes up, for the most part it’s a personal choice.
There really should be a label on the product to let the consumer know. This should be similar to Norway, which requires disclosure of retouched images. I can think of no other way to help with the body-image issues that arise from pictured people who can never be that way in real life.
You will still need hardware to run those open models, and that avenue is far easier to contain and close than stopping code distribution. Expect the war on private/personal compute to ramp up even more significantly than it already has.
But then how will we review each PR enough to have confidence in it?
How will we understand the overall codebase too after it gets much bigger?
Are there any better tools here other than just asking LLMs to summarize code, or flag risky code... any good "code reader" tools (like code editors but focused on this reading task)?
> However, this technology is far too important to be in the hands of a few companies.
I wholeheartedly agree 1000%. Something needs to change this landscape in the US.
Furthermore, the fact that open-source models are dominated entirely by China is also problematic.
UBI gives government more control over individuals' finances, especially those without independent means. Poverty is also the result of unfair taxation (poor people face onerous taxes while receiving less and less in return, while the wealthy avoid tax at every turn), and of red tape favouring big business that makes it difficult for people to be self-employed. UBI does not address those issues.
UBI also centralises control at the expense of local self-determination and community engagement.
Nope. It was coding. Enjoying the process itself.
If I wanted to hand out specs and review code (which is what an AI jockey does), I'd be having fucking project managers as role models, not coders...
If AI writes everything for you, cool, you can produce faster? But is that really true if you're renting capacity? What if costs go up and now you can't rent anymore, but you also can't code anymore and the documentation is no longer there (because the MCP-etc. assumption is that everything will be done by agents)? Then what?
And what about the people who work on messy 'Information Systems'? Things like Redis are impressive, but that's closed-loop software, just like compilers.
Some smart guy back in the '80s wrote that it's always a people problem.
I hope AI leads to a Cambrian explosion of software people running their own businesses, given the force multiplier it affords. On the other hand, the jaded part of me feels that AI may lead to a consolidation into a very small set of monopolies. We'll see.
What's missing (and not captured) is the test of the changed software to verify the fixes solved the problem and no other problems were introduced...
Then an analysis of the original software changes, and an analysis of the test results, test cases and test evidence to ensure it is appropriate and adequate.
We have people who are running the same task 10 times in parallel and having one LLM write a prompt for another LLM to execute, then sitting on their phone for an hour while they let the AIs battle it out. For tasks that should take 3 minutes. Then having another coding agent make a PR, update JIRA tickets, etc.
Frankly it blows my mind that so many developers have so little actual understanding of cost associated with AI.
We now have top-chart hits which are soulless AI songs. It's perhaps a testament to the fact that some of the genres where this happens a lot were already trending towards industrially produced songs with little soul in them (you know what genres these are, and it's hilarious that one of them). But most concerning to me is the idea that, starting now, we'll never be able to trust our eyes about what's true.
We can't trust that someone who calls us is human, or that a photo or recording is of a real event. This was always true in some sense, but it at least required a ton of effort to pull off. Now it's going to be trivial. And for every photo depicting an actual event, there will be a thousand depicting non-events. What does that do to the most important thing we have as a society: the "shared truth"? The decay of traditional media already put a big dent in this, with catastrophic results. AI will make it 10x worse.
But maybe we should cherish these people. Maybe it's among them we find the embryo of the resistance: people who held out when most of us were seduced, seduced into giving the machine all our knowledge, all our skills, all the secrets about us we were not even aware of ourselves, and setting it up to be orders of magnitude more intelligent than any of us combined. And finally, just as mean, vindictive and selfish as most of the people in the training data on which it was trained.
Maybe it's good to stay skeptical a bit longer.
If programmers keep up good coding practices, then the vibe coders are the ones left behind.
There is additionally an implicit historical appeal to the Industrial Revolution and the revolutionary politics associated with it, in which software developers, cast as the cottage-industry weavers etc., are seen as walking blindly into their mass replacement by machines, with the implication that those machines will be manageable by de-skilled labour whose role will simply be to ensure their smooth and safe running. I think it is important to try to see things in this way, but there is also a lot lacking from the analogy.
I would draw an analogy here between building software and building a home.
When building a home we have a user providing the requirements, the architect/structural engineer providing the blueprint to satisfy the reqs, the civil engineer overseeing the construction, and the mason laying the bricks. Some projects may have a project-manager coordinating these activities.
Building software is similar in many aspects to building a structure. If developers think of themselves as masons, they are limiting their perspective. If AI can help lay the bricks, use it! If it can help with the blueprint or the design, use it. It is a fantastic tool in the tool belt of the profession. I think of it as a power tool and want to keep its batteries charged to use it at any time.
Seriously? If these were open source tools that anyone could run on their home PC that statement would make sense, but that's not what we are talking about here. LLMs are tools that cost massive amounts of money to operate, apparently. The tool goes away if the money goes away. Fossil fuels revolutionized the world, but only because the cost benefit made sense (at least in the relative short-term).
The reason I am anti-AI is because I believe it poses a net-negative to society overall. Not because it is inherently bad, but because of the way it is being infused into society by large corps (and eventually governments). Yes, it makes me, and other developers, more productive. And it can more quickly solve certain problems that were time consuming or laborious to solve. And it might lead to new and greater scientific and technological advances.
But those gains do not outweigh all of the negatives: concentration of power and capital into an increasingly small group; the eventual loss of untold millions of jobs (with, as of yet, not even a shred of indication of what might replace them); the loss of skills in the next generations, who are delegating much of their critical thinking (or thinking, period) to ChatGPT; the loss of trust in society now that any believable video can be easily generated; the concentration of power in the control of information if everyone is getting their info from LLMs instead of the open internet (and ultimately, potentially, the death of the open internet); the explosion in energy consumption by data centers, which exacerbates rather than mitigates global warming; and plenty more.
AI might allow us to find better technological solutions to world hunger, poverty, mental health, water shortages, climate change, and war. But none of those problems are technological problems; technology only plays a small part. And the really important part is being negatively exacerbated by the "AI arms race". That's why I, who was my whole life a technological optimist, am no longer hopeful for the future. I wish I was.
This is the crux. AI suddenly became good and society hasn't caught on yet. Programmers are a bit ahead of the curve here, being closer to the action of AI. But in a couple of years, if not already, all the other technical and office jobs will be equally affected. Translators, admin, marketing, scientists, writers of all sorts and on and on. Will we just produce more and retain a similar level of employment, or will AI be such a force multiplier that a significant number or even most of these jobs will be gone? Nobody knows yet.
And yet, what I'm even more worried about for their society-upending abilities is robots. These are coming soon, and they'll arrive with just as much suddenness and inertia as AI did.
The robots will be as smart as the AI running them, so what happens when they're cheap and smart enough to replace humans in nearly all physical jobs?
Nobody knows the answer to this. But in 5 years, or 10, we will find out.
This is already happening.
AI had an impact on the simplest coding first; this is self-evident. So any impact it had had to show up first in the quantity of software created, and only then in its quality and/or complexity. And mobile apps are/were a tedious job with a lot of scaffolding and a lot of "blanks to fill" to make them work and get accepted by stores. So the first thing that had to skyrocket in numbers with the arrival of AI was mobile apps.
But the number of apps on the Apple App Store is essentially flat, and the rate of increase is barely distinguishable from past years: +7% instead of +5%. Not even visible.
Apparently the world doesn't need/can't make monetisable use of much more software than it already does. Demand wasn't quite satisfied say 5 years ago, but the gap wasn't huge. It is now covered many times over.
Which means, most of us will probably never get another job/gig after the current one - and if it's over, it's over and not worth trying anymore - the scraps that are left of the market are not worth the effort.
There's also a short-termism aspect of AI generated code that's seemingly not addressed as much. Don't pee your pants in the winter to keep warm.
That's fine if he feels that way, but he can only speak for himself, not for all the copyright holders of the other code that was "ingested" to power LLMs.
If you want to see how most creators who care about their work and actually own it (unlike most software) feel about this, look at book authors and illustrators. Many of them have a burning hatred for AI bros not only stealing their work, but then also using it to destroy the livelihoods of their field.
A lot of the techbros who do care about their work aren't feeling as wronged or threatened, because we're trying to pivot to get a piece of the pie, from all the exploitation and pillaging of many fields.
AI is both a near-perfect propaganda machine and, on the programming front, a self-fulfilling prophecy: yes, AI will be better at coding than humans. Mostly because humans are made worse by using AI.
I think there are some negative consequences to this; perhaps a new form of burn out. With the force multiplier and assisted learning utility comes a substantial increase in opportunity cost.
If I can run an agent on my machine, with no remote backend required, the problem is solved. But right now, aren't all developers throwing themselves into agentic software development betting that these services will always be available to them at a relatively low cost?
Show me these "facts"
It definitely can.
The innovation that was the open, social web of 20 years ago? Still an option, but drowned out by closed, ad-fueled toxic gardens and drained by illegal AI copy bots.
The innovation that was democracy? Purposely under attack in every single place it still exists today.
Insulin at almost no cost (because it costs next to nothing to produce)? Out of the question for people who live under the regime of pharmaceutical corporations that are not reined in by government, by collective rules.
So, a technology that has a dubious ROI over the energy and water and land consumed, incites illegal activities and suicides, and that is in the process of killing the consumer public IT market for the next 5 years if not more, because one unprofitable company without solid verifiable prospects managed to pass dubious orders with unproven money that lock memory components for unproven data centers... yes, it definitely can be taken back.
No, I really don't think they will. Software has only been getting worse, and LLMs are accelerating the rate at which incompetent developers can pump out low quality code they don't understand and can't possibly improve.
I want to write less. Just knowing that LLMs are going to be trained on my code makes me feel more strongly than ever that my open source contributions will simply be stolen.
Am I wrong to feel this? Is anyone else concerned about this? We've already seen some pretty strong evidence of this with Tailwind.
Imo it's too hard for companies to get infra into a place where text can be an interface. IaC, which is close enough to interacting with infra through text, is mostly an aspiration beyond a certain scale ime.
You will not find such a government. They're here for a different purpose
I've written complete GUIs in 3D on the front end. This GUI was non-traditional: it allows you to play back, pause, speed up, slow down and rewind a GPS track like a movie. There is real-time color changing and drawing of the track as the playback occurs.
Using Mapbox to do this straight would be too slow. I told the AI to optimize it by going straight into shader extensions for Mapbox to optimize the GPU code.
Make no mistake: LLMs are incredible for things that are not systems-based but require interaction with 3D and GUIs.
There is enough evidence to support claims that AI is a black hole where money gets evaporated.
It's great that you can delegate some tasks to it now and not have to write all of the code yourself. There is some evidence showing that it doesn't benefit junior developers nearly as much. If you didn't generate the specification test that demonstrates the concurrency issue you were trying to solve in Redis, but you read the code it generated and understood it, then you didn't need to learn anything. How is a junior developer who has never solved such problems supposed to learn so they can do the same thing?
But worse, UBI and such are the solutions of libertarian oligarchs who dream of a world without people, according to Doctorow, and I think he's right. It seems like the author also wants this? He doesn't seem to know what will happen to the jobless, but we should vote in someone who will start a government program to take care of them. How long until the author is replaced as well?
Lastly… who’s “hyping” anti-AI and what do they gain from making false claims?
I think the real problem for programming is when these companies all collapse and take the rest of the economy down with them... are there going to be enough programmers left to maintain everything? Or will we be sifting through mountains of tech debt, never to see the light of day again?
> However, this technology is far too important to be in the hands of a few companies.
This is the most important assessment and we should all heed this warning with great care. If we think hyperscalers are bad, imagine what happens if they control and dictate the entire future.
Our cellphones are prisons. We have no fundamental control, and we can't freely distribute software amongst ourselves. Everything flows through funnels of control and monitoring. The entire internet and all of technology could soon become the same.
We need to bust this open now or face a future where we are truly serfs.
I'm excited by AI and I love what it can do, but we are in a mortally precarious position.
Said by someone who spent his career writing code, it lacks a bit of detail... a more correct way to phrase it would be: "if you're already an expert in good coding, you can now use these tools to skip most of the code writing".
LLMs today are mostly some kind of "fill-in-the-blanks automation". As a coder, you try to create constraints (define types for typechecking constraints, define tests for testing constraints, define the general ideas you want the LLM to code, because you already know about the domain and how coding works), then you let the model "fill in the blanks" while you regularly check that all tests pass, etc.
And no, my work as redteam IT sec. is completely unrelated :D
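A minimal sketch of that constraints-first style (Python; the function and tests are my own illustration, not from the comment above): the human pins down the signature, the spec and the tests, and the function body is the "blank" the model fills in and gets checked against.

```python
def merge_intervals(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Constraint 1: the type signature fixes the shape of input and output.
    Constraint 2: the spec, i.e. return non-overlapping intervals covering the
    same points, sorted by start. The body below is the blank left to the model."""
    merged: list[tuple[int, int]] = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


def test_merge_intervals():
    # Constraint 3: tests encode the properties we care about, so "all tests
    # pass" is a meaningful check on whatever the model generated.
    out = merge_intervals([(5, 7), (1, 3), (2, 4)])
    assert out == [(1, 4), (5, 7)]
    assert all(a[1] < b[0] for a, b in zip(out, out[1:]))  # non-overlapping
```

The division of labour matches the comment above: the constraints (types, spec, tests) carry the domain knowledge, and the generated part is the piece that is cheap to re-roll if it comes back wrong.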
Great news if you know the current generation of languages, you won't need to learn a new one for quite some time.
Who is going to control AI? The people in power, obviously. They will buy all of the computers, so running models locally will no longer be feasible. In case it hasn't been obvious, this is already happening. It will only get worse.
They will not let themselves be taxed.
But who will buy the things the people in power produce if nobody has a job?
This is how civilization collapses.
What I would really urge people to avoid doing is listening to what any tech influencer has to say, including antirez. I really don't care what famous developers think about this technology, and it doesn't influence my own experience of it. People should try out whatever they're comfortable with, and make up their own opinions, instead of listening what anyone else has to say about it. This applies to anything, of course, but it's particularly important for the technology bubble we're currently in.
It's unfortunate that some voices are louder than others in this parasocial web we've built. Those with larger loudspeakers should be conscious of this fact, and moderate their output responsibly. It starts by not telling people what to do.
There's still no point. ReSharper and clang-tidy still have more value than all LLMs. It's not just hype, it's a bloody cult, right beside those NFT and church-of-COVID people.
Programmers are simply accepting whatever the owner class does to them [3] and calling it Technological Determinism, even if just indirectly.
> But, I would not respect myself and my intelligence if my idea of software and society would impair my vision: facts are facts, and AI is going to change programming forever.
Token gestures:
> What is the social solution, then? Innovation can't be taken back after all. I believe we should vote for governments that recognize what is happening, and are willing to support those who will remain jobless.
Innovation can't be taken back (see: technological determinism; tech people are powerless to affect anything) so we should... vote for good governments. That are willing to support those who remain jobless.[0]
Keyword “willing”. Take away people's political leverage to strike. Now they may have no wealth. What are they to do? What is their political influence? The non-answer is to hope that the government will be WILLING to support their existence.
> And, the more people get fired, the more political pressure there will be to vote for those who will guarantee a certain degree of protection.
The more people get fired, the fewer people with political leverage. The realpolitik trend would be the opposite of what is written here.
> But I also look forward to the good AI could bring: new progress in science, that could help lower the suffering of the human condition,
Any progress made in science can be artificially restricted. See foodstuffs. We could apparently distribute enough to feed the world, but that doesn't make as much money as throwing a lot of it away.
Progress for any given individual can be non-existent unless it is evenly distributed.
> which is not always happy.
At least the article looks completely organic in terms of writing.
Genre: I Have Anecdotes About AI And If You Don't See What I'm Seeing You Are Misguided.[4]
[1] Not a vocation. Simply the observation that famous and respected programmers will have more weight outside their niche simply because of who they are.
[2] Basic Income hails from the right-libertarian tradition. Leave the rich alone, give the commoner enough crumbs to survive. Later it was romanticized as a way for former programmers to go off to their evergreen pastures of endless side projects.
[3] https://news.ycombinator.com/item?id=46526137
[4] https://fly.io/blog/youre-all-nuts/
[0] Let's vote and hope that Italy doesn't get a fascist prime minister next time.
I am sorry, but this is incredibly naïve. Governments don't work that way. It reflects a lack of social awareness. "People getting fired" in 2026 is not the same as it was even 10 years ago. Society has changed; losing a job today is demonstrably more dangerous.
This is akin to saying "Sure, thousands of houses will burn down, but the more houses burn down, the more political pressure there will be". Why do we have to wait for the houses to burn down?
> Sociologist Judy Wajcman wrote about the concept of how tech is speeding up tasks precisely like this article describes, however she observed that it has never quite manifested as more free time for the laborer.
Every time I read blogs or tweets or posts like this, this point becomes more and more apparent. The authors are constantly explaining how they were busy with all their work, without time to implement the less important or side-project like things. The point of the post is often that now they can invest whatever free time they had into doing so, thus doing more work than they did before. I have literally never read such a post where the author explains how they have automated away their job and are now working less than before they started using AI.
I think this is a great point to ponder as we continue on this path of overworking and labor value destruction, and not the naive benevolent socialism that the authors all assume will occur magically.
Full story at 11.
> AI code is slop, therefore you shouldn't use it
You should learn how to responsibly use it as a tool, not a replacement for you. This can be done, people are doing it, people like Salvatore (antirez), Mitchell (of Terraform/Ghostty fame), Simon (swillison) and many others are publicly talking about it.
> AI can't code XYZ
It's not all-or-nothing. Use it where it works for you, don't use it where it doesn't. And btw, do check that you actually described the problem well. Slop-in, slop-out. Not sayin' this is always the case, but turns out it's the case surprisingly often. Just sayin'
> AI will atrophy your skills, or prevent you from learning new ones, therefore you shouldn't use it
Again, you should know where and how to use it. Don't tune out while doing coding. Don't just skim the generated code. Be curious, take your time. This is entirely up to you.
> AI takes away the fun part (coding) and intensifies the boring (management)
I love programming, but TBH, for non-toy projects that need to go into production, at least three quarters of the work is boring boilerplate. And making that part interesting is one of the worst things you can do in software development! Down that path lie resume-driven development, architecture astronautics, abuse of the design patterns du jour, and other sins that will make code maintenance on that thing a nightmare! You want boring, stable, simple. AI excels at that. Then you can focus on the small bit that's fun and hand-craft that!
Also, you can always code for fun. Many people with boring coding jobs code for fun in the evenings. AI changes nothing here (except possibly improving the day job drudgery).
> AI is financially unsustainable, companies are losing money
Perhaps, and we're probably in the bubble. Doesn't detract from the fact that these things exist, are here now, work. OpenAI and Anthropic can go out of business tomorrow, the few TB of weights will be easily reused by someone else. The tech will stay.
> AI steals your open source code, therefore you shouldn't write open-source
Well, use AI to write your closed-source code. You don't need to open source anything if you're worried someone (AI or human) will steal it. If you don't want to use something on moral grounds, that's a perfectly fine thing to do. Others may have a different opinion on this.
> AI will kill your open source business, therefore you shouldn't write open-source
Open source is not a business model (I've been saying this for longer than median user of this site has been alive). AI doesn't change that.
As @antirez points out, you can use AI or not, but don't go hiding under a rock and then being surprised in a few years when you come out and find the software development profession completely unrecognizable.