Maybe it’s because my approach is much closer to a Product Engineer's than a Software Engineer's, but code output is rarely the reason projects I've worked on are delayed. All my productivity issues can be attributed to poor specifications, or to problems that someone just threw over the wall. Every time I'm blocked, it's because someone didn't make a decision on something, or because no one thought far enough ahead to see that the decision was needed.
It irks me so much when I see the managers of adjacent teams pushing for AI coding tools when the only thing the developers know about the project is what was written in the current JIRA ticket.
My experience has been the opposite. I've enjoyed working on hobby projects more than ever, because so many of the boring and often blocking aspects of programming are sped up. You get to focus more on higher-level choices, overall design, and code quality, rather than hunting down specific library usages or other minutiae. Learning is accelerated, and the loop of making choices and seeing code generated for them is a bit addictive.
I'm mostly worried that it might not take long before I'm more of a hindrance in the loop than anything else. For now I still have better overall design sense than AI, but it's already much better than I am at producing code for many common tasks. If AI develops more overall insight and sense, and the ability to handle larger codebases, it's not hard to imagine a world where I no longer even look at or know what code is written.
It's similar with GPS navigation. When you read a map, you learn to localise yourself based on the landmarks you see; you build an understanding of where you are, where you want to go, and how to get there. But if you just follow a navigation system telling you "turn right", "continue straight", "turn right", you never build that intuition. I have seen people follow their navigation system around two blocks only to end up right next to where they started. The navigation was inefficient, and with some intuition they could have said, "oh, actually it's right behind us, this route is bad".
Back to coding: if you have a deep understanding of your codebases and dependencies, you may discover that you could extract part of one codebase into a library and reuse it in another. Or that instead of implementing a complex feature in your codebase, you could contribute a patch to a dependency that makes the feature much simpler (e.g. because the dependency already has this logic internally and you could just expose it instead of rewriting it). But that requires an understanding of those dependencies, starting with: do you have access to their code in the first place (either because they are open source or because they belong to your company)?
Those AIs obviously help with writing code. But do they help you build the kind of understanding of the codebase that turns into intuition you can leverage to improve the project? I'm not sure.
Is it necessary, though? I don't think so: the tendency is that software becomes more and more profitable by becoming worse and worse. AI may just help writing more profitable worse code, but faster. If we can screw the consumers faster and get more money from them, that's a win, I guess.
I also have a large collection of handwritten family letters going back over 100 years. I've scanned many of them, but I want to transcribe them to text. The job is daunting, so I ran them through some GPT apps for handwriting recognition. GPT did an astonishing job and at first blush, I thought the problem was solved. But on deeper inspection I found that while the transcriptions sounded reasonable and accurate, significant portions were hallucinated or missing. Ok, I said, I just have to review each transcription for accuracy. Well, reading two documents side by side while looking for errors is much more draining than just reading the original letter and typing it in. I'm a very fast typist and the process doesn't take long. Plus, I get to read every letter from beginning to end while I'm working. It's fun.
So after several years of periodically experimenting with the latest LLM tools, I still haven't found a use for them in my personal life and hobbies. I'm not sure what the future world of engineering and art will look like, but I suspect it will be very different.
My wife spins wool to make yarn, then knits it into clothing. She doesn't worry much about how the clothing is styled because it's the physical process of working intimately with her hands and the raw materials that she finds satisfying. She is staying close to the fundamental process of building clothing. Now that there are machines for manufacturing fibers, fabrics and garments, her skill isn't required, but our society has grown dependent on the machines and the infrastructure needed to keep them operating. We would be helpless and naked if those were lost.
Likewise, with LLM coding, developers will no longer develop the skills needed to design or "architect" complex information processing systems, just as no one bothers to learn assembly language anymore. But those are things that someone, or something, must still know about. Relegating that essential role to an LLM seems like a risky move for the future of our technological civilization.
https://www.fictionpress.com/s/3353977/1/The-End-of-Creative...
Some existential objections arise: how sure are we that there isn't an infinite regress of ever-deeper games to explore? Can we claim that every game, without exception, has an enjoyment-nullifying hack waiting to be discovered? If pampered pet animals don't appear to experience the boredom we anticipate is coming for us, is the expectation completely wrong?
The one part that I believe will still be essential is understanding the code. It's one thing to use Claude as a (self-driving) car, where you delegate the actual driving but still understand the roads being taken (both for learning and for validating that the route is in fact correct).
It's another thing to treat it like a teleporter, where you tell it a destination and then are magically beamed to a location that sort of looks like that destination, with no way to understand how you got there or if this is really the right place.
The purpose of a hobby is to be a hobby, and the archetypal tech hobby project is about self-mastery. You cannot improve your mastery with a "tool" that robs you of most of the minor and major creative and technical decisions of the task. Building IKEA furniture will not make you a better carpenter.
Why be a better carpenter? Because software engineering is not about hobby projects. It's about research and development at the fringes of a business's (or org's, or project's...) requirements -- evolving their software towards solving them.
Carpentry ("programming craft") will always (modulo 100+ years) be essential here. Power tools do not reduce the essential craft; they shorten the time until that craft is required -- they mean we run into the walls of required expertise faster.
AI applied to non-hobby projects -- R&D programming in the large, where the requirements aren't already specified as prior-art programs (of the functional and non-functional variety, etc.) -- just accelerates the time to hitting the wall where you're going to shoot yourself in the foot if you're not an expert.
I have not seen a single "sky is falling" take from an experienced software engineer, i.e., from those operating at typical "in the large" programming scales on typical R&D projects (revisions to legacy systems, or greenfield work where only the requirements are new).
So the OP was in a bad place without Claude anyways (in industry at least).
This realization is the true bitter one for many engineers.
The models all have their specific innate knowledge of the programming ecosystem from the point in time when their last training data was collected. However, unlike humans, they cannot update that knowledge unless a new fine-tuning is performed -- and even then, they can only learn about new libraries that are already in widespread use.
So if everyone shifts to vibe coding, will this mean that software ecosystems effectively become frozen? New libraries can't gain popularity because AIs won't use them in code, and AIs won't start using them because they aren't popular.
New challenges would come up. When calculators made arithmetic easy, math challenges moved to the next level. If AI does all the thinking and creativity, humans will move to the next level too. That level could be the menial work AI can't touch -- for example, navigating the complexities of legacy systems, workflows, and the human interactions needed to keep things working.
This seems completely out of whack with my experience of AI coding. I'm definitely in the "it's extremely useful" camp, but there's no way I would describe its code as high quality and efficient. It can do simple tasks, but it often gets things completely wrong, or takes a noob-level approach (e.g. O(N) instead of O(1)).
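To illustrate the kind of thing I mean, here is a made-up TypeScript sketch of the pattern (the names are invented, not taken from any real generated code):

    // The "noob-level" version: an O(N) array scan on every membership check.
    const allowedUsers: string[] = ["alice", "bob", "carol"];

    function isAllowedSlow(user: string): boolean {
      return allowedUsers.includes(user); // O(N) per lookup
    }

    // The obvious fix: a Set gives O(1) average-case lookups.
    const allowedSet = new Set(allowedUsers);

    function isAllowedFast(user: string): boolean {
      return allowedSet.has(user); // O(1) per lookup
    }

Trivial here, but the same pattern inside a hot loop over a large collection is exactly the sort of thing I keep having to catch.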
Is there some trick to this that I don't know? Because personally I would love it if AI could do some of the grunt work for me. I do enjoy programming but not all programming.
All this went away. I felt a loss of joy and nostalgia for it. It was bitter.
Not bad, but bitter.
The bitter lesson is going to be for junior engineers, who will see fewer job offers without seeing that consulting powerhouses are eating their lunch.
At least we have one person who understands it in detail: the one who wrote it.
But with AI-generated code, it feels like nobody writes it anymore: everybody reviews. Not only do we not like reviewing, we don't do it well. And if you want to review code thoroughly, you might as well write it yourself. Many open source maintainers will tell you that it's often faster for them to write the code than to review a PR from a stranger they don't trust.
My only gripe is that the models are still pretty slow, and that discourages iteration and experimentation. I can’t wait for the day a Claude 3.5-grade model with 1000 tok/s speed is released; that will be a total game changer for me. Gemini 2.5 recently came closer, but it’s still not there.
We have both been using and integrating AI code-support tools since they became available, and we have both been writing code (usually Python) for 20+ years.
We both agree that Windsurf + Claude is our default IDE/environment from now on. We also agree that for all future projects we can likely cut the number of engineers needed by a third.
Based on what I’ve been using professionally for the last year (Copilot) and on the side, I’m confident I could build faster, better, and with less effort with 5 engineers and AI tools than with 10 or 15. Communication overhead also drops by 3x, which prevents slowdowns.
So if I have a highly available 5-layer stack application (frontend, backend, analytics, training/inference, networking/data management) with IPC between the layers, then instead of one senior and two juniors per layer for a total of 15 people, I only need the 5 mid-seniors now.
Compared to what you see from game jams, where solo devs sometimes create whole games in just a few days, it was pretty trash.
It also tracks with my own experience. Yes, Cursor quickly helps me get the first 80% done, but then I spend so much time cleaning up after it that I barely save any time in total.
For personal projects where you don't care about code quality, I can see it as a great tool. If you actually have professional standards, no. (Except maybe for unit tests; I hate writing those by hand.)
Most of the current limitations CAN be solved by throwing even more compute at them. Absolutely. The question is: will it make economic sense? Maybe if fusion becomes viable some day, but right now, with the end of fossil fuels and climate change looming? Is generative AI worth destroying our planet for?
At some point the energy consumption of generative AI might get so high and expensive that you might be better off just letting humans do the work.
Why would this be the exception?
>This is the exact same feeling I’m left with after a few days of using Claude Code.
For me what matters is the end result, not the mere act of writing code. What I enjoy is solving problems and building stuff; writing code is just one part of that.
I would gladly use a tool to speed up that part.
But from my testing, unless the task is very simple and trivial, using AI isn't always the simple, efficient walk in the park it's made out to be.
Most professional software development hasn't been fun for years, mostly because of all the required ceremony around it. But it doesn't matter, for your hobby projects you can do what you want and it's up to you how much you let AI change that.
It thought it was OK to use "new" for an object literal in JS.
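Something like this, i.e. a reconstruction of the pattern from memory, not the actual generated code:

    // Roughly what it wrote: building the object with `new`...
    const config = new Object() as { retries?: number };
    config.retries = 3;

    // ...where a plain object literal is the idiomatic form:
    const betterConfig = { retries: 3 };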
That's the opposite of what's happened over the past year or two. Now many more non-technical people can build (and are building) software.
I feel the same with a lot of points made here, but hadn't yet thought about the financial one.
When I started out with web development, that was one of the things I really loved: anyone could just read about HTML, CSS, and JavaScript and get started with any kind of free-to-use code editor.
Though you can still do just that, it seems like you would always lag behind the 'cool guys' using AI.
Today's AI, driven by the code examples it was trained on, seems more likely to do a good job of optimization in many cases than to have gleaned the principles of conquering complexity, of writing bug-free code that is easy and flexible to modify, etc. To learn these "journeyman skills", an LLM would need access to a large number of LARGE projects (not just Stack Overflow snippets) and/or to the thought processes (typically not written down) behind why certain design decisions were made for a given project.
So, at least for the time being, as developers wielding AI as a tool, I think we can still have the satisfaction of the higher-level design (which may be unwise to leave to the AI until it is better able to reason and learn), while leaving the drudgework (and a little bit of the fun) of coding to the tool. In any case, we can still have the satisfaction of dreaming something up and making it real.
My (naive?) assumption is that all of this will come down: the price (eventually free) and the energy costs.
Then again, my daughters know I am a Pollyanna (someone has to be).
The hardware for AI is getting cheaper and more efficient, and the models are getting less wasteful too.
Just a few years ago, GPT-3.5 was a secret sauce running on the most expensive GPU racks, and now models beating it are available with open weights and run on high-end consumer hardware. A few iterations down the line, good-enough models will run on average hardware.
When that Xcom game came out, filmmaking, 3D graphics, and machine learning required super expensive hardware out of reach of most people. Now you can find objectively better hardware literally in the trash.
I don't regard programming as merely the act of outputting code. Planning, architecting, keeping a high-level overview, and keeping the objective in focus also matter.
Even if we regard programming as just writing code, we have to ask ourselves why we do it.
We plant cereals to be able to eat. At first we used some primitive stone tools to dig the fields. Then we used bronze tools, then iron tools. Then we employed horses to plough the fields more efficiently. Then we used tractors.
Our goal was to eat, not to plough the fields.
Many objects are mass produced now while they were the craft of the artisans centuries ago. We still have craftsmen who enjoy doing things by hand and whose products command a big premium over mass market products.
I don't have an issue if most of the code will be written by AI tools, provided that code is efficient and does exactly what we need. We will still have to manage and verify those tools, and to do that we will still have to understand the whole stack from the very bottom - digital gates and circuits to the highest abstractions.
AI is just another tool in the toolbox. Some carpenters like to use very simple hand tools, while others swear by the most modern ones, like CNC machines.
Many of us do write code for fun, but that results in a skewed perspective where we don’t realize how inaccessible it is for most people. Programmers are providers of expensive professional services and only businesses that spread the costs over many customers can afford us.
So if anything, these new tools will make some kinds of bespoke software development more accessible to people who couldn’t afford professional help before.
Although, most people don’t need to write new code at all. Using either free software or buying off-the-shelf software (such as from an app store) works fine for most people in most situations. Personal, customized software is a niche.
Even before AI really took off, that was an experience many developers, including me, had. Outsourcing has taken over much of the industry. If you work in the West, there is a good chance that a large part of your work is managing remote teams, often in India or other low-cost countries.
What AI could change is either reducing the value of outsourcing or making software development so accessible that managing the outsourcing becomes unnecessary.
Either way, I do believe that software developers are here to stay. They just won't be writing much code. A software developer in the US costs $100k a year, and writing software will simply never again be worth $100k a year. There are people and programs that are much cheaper.
Don't get me wrong, it lets me be more productive sometimes but people that think the days of humans programming computers are numbered have a very rosy (and naive) view of the software engineering world, in my opinion.
Forty-six percent of the global population has never hired a human programmer either, because a good human programmer costs more than $5 a day{{citation needed}}.
It is also useful for learning from independent code snippets, e.g., learning a new API.
> It makes economic sense, and capitalism is not sentimental.
I find this kind of fatalism irritating. If capitalism isn't doing what we as humans want it to do, we can change it.
For some reason it also included an import of "resolve" from "dns". (The code didn't even need a promise there.)
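Roughly what it produced (a reconstruction from memory, not the verbatim output; identifiers invented):

    // The unused import it added for no apparent reason.
    import { resolve } from "dns";

    // And the needless promise: nothing here is asynchronous.
    async function greet(name: string): Promise<string> {
      return Promise.resolve(`Hello, ${name}!`);
    }

    // A plain synchronous function would have done the same job:
    function greetSync(name: string): string {
      return `Hello, ${name}!`;
    }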
As for: "In some countries, more than 90% of the population lives on less than $5 per day."
Well, with the orders of magnitude difference already in place, this is not going to meaningfully impact that at all.
I'm not dismissing this: I'm saying that it isn't much of a building block in thinking about all of the things AI is going to change and that should be addressed as a result, because it's simply in the pile of problems labeled "was here before, will be here after".
And really, it ought to be thought of in the context of “can we leverage AI to help address this problem in ways we cannot today?”