And it is great. It does produce fixes, and it produces a facsimile of understanding. It answers my questions, and is often right. And tinkering with the process is satisfying: by integrating more and more data and writing better specs, you can get better results. It's tempting to think that this way of working could be sustainable, but it's also scary to lose the understanding, to not have confidence in how things work. Finding duplicated stacks using different libraries, or even the same library, is becoming more and more common. Even our debugging tools and tracing grow fragmented and unstandardized.
I liked the old way of working. It was fun for me, if often frustrating. It was like solving a hard sudoku on the train. This new way is lower friction, but more stressful. It's like steering a rocket ship while using chopsticks to hold the wheel. You desperately want to slow things down and work methodically, to be sure and safe. But you won't get anywhere near as far if you do that.
Somewhere quiet, the tech debt demon smiles.
This has not been my experience. Sure, it sometimes feels like more work to fix the problems in AI code - it's a different skillset than writing code from scratch. But the speed at which I can deliver software has increased significantly by using coding agents.
But the fact is, this is not how it is. Every competent developer I know is delivering significantly more after becoming AI-enabled.
Anyone seriously using the tools without a chip on their shoulder is going to say the same.
Are the tools delivering perfect code 100% of the time? No, of course not. But that's the new skill: guiding them so they deliver good-enough code at 5-50x the velocity. As the models improve and the ecosystem tries out new workflows, the skill changes and the output gets better and better.
What we're capable of delivering now is incredible and would have been unimaginable just a few years ago.
Probably because they mandate its adoption. And while there are plenty of developers who will happily comply and see it as a good thing, there are others who will do it only because they have to, or risk losing their jobs.
It's a bit of a silly thing to claim. "We made everyone use it, so they did, and now adoption is going up!"
And I used to love my work :(
"you can outsource your thinking but not your understanding"
There's just no way not to generate much more code with LLMs than we would as humans, so structuring code well becomes more important than ever before.
Just yesterday I was interviewing for a very interesting job and I completely flunked the coding question in an unacceptable way for my level of experience. The question was easy, I just couldn't get past some syntactic issues. For 8 months, Claude wrote all of my Python classes and Pydantic types. Now I had to write a dataclass, and because I always just resorted to standard classes before the advent of LLMs, I stumbled. And froze. And panicked. And that was it. Of course you could say I should have just scrapped the dataclass and written it as a simple class. The point is I felt very, very stupid. LLMs suddenly felt like a huge disadvantage.
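For anyone who hasn't felt this particular brand of panic: the thing I blanked on is a handful of lines. A minimal sketch of the two shapes (the class and field names here are hypothetical, just for illustration):

    from dataclasses import dataclass

    # The dataclass version: declare the fields once and
    # __init__, __repr__, and __eq__ are generated for you.
    @dataclass
    class Point:
        x: float
        y: float = 0.0

    # The plain-class version I had always written by hand:
    class PlainPoint:
        def __init__(self, x: float, y: float = 0.0):
            self.x = x
            self.y = y

Trivial when you see it, humiliating when your hands have forgotten it.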
All this to say I disagree with LLMs "rotting" my brain. Quite the opposite, I know that it's possible to use LLMs to be efficient and correct. It's more the actual mechanical act of writing that gets rusty.
But now, with all the vibe coders and agentic coding, I've pretty much lost a lot of the interest. I sometimes receive PRs to review with thousands of lines of code where it's clear they came from some AI and were never even tested to begin with. Why should I as a reviewer do that for you? If you want to use any AI, at least make sure that it works as it's supposed to, since I'll already have to go through all the code that you didn't write, and likely didn't even read yourself.
Similarly, when I have to build something I sometimes use AI, but it feels like cheating, and it's reducing my coding ability; I can already feel that. But in the end I think the business just wants that, so I use it, and I start to care less about the output, the quality, the whole architecture. Thanks to AI I'm putting in less effort; I let AI work for me while I do other things. Maybe that's the way.
"The thing I always love is when there's an intellectual challenge that when you master it gives you practical abilities."
Tangential capabilities from mentally challenging tasks are your personal differentiator - regardless of what tools you use or what you're working on. Running 5 miles puts you way ahead of the person who rode their e-bike for 20. Performing the hard work pays off - not always monetarily, but like exercise, it's uplifting and healthy, and provides long-term advantages.
Corporations need differentiators as well - hopefully they rely on their creative and innovative employees to stand out, regardless of tooling.
Today I learned about a more elegant helper method on Apache Commons Lang's StringUtils class for Java.
The function was `trimToNull()`
Normally I would have just done:

if (StringUtils.isBlank(foo)) {
    responseDTO.setFoo(null);
} else {
    responseDTO.setFoo(foo.trim()); // assuming the non-blank path sets the trimmed value, which is what trimToNull does
}
Now I can just do responseDTO.setFoo(trimToNull(foo)). I had written the original code; Claude suggested the improvement.
I enjoy shipping code and reviewing what Claude writes.
To add to that, what I find most helpful is the boring stuff, the JIRA cleanup, trawling Wikis and other sources to find out what the historical context of something was.
Normally that would take me all day to do, with a 30 minute code change.
Now I can do that in about 15 minutes and think about building or shipping some tool which I never had time to do.
I got into software engineering because I was always fascinated by getting computers to do stuff, and I really enjoyed the manual task of programming. It's been a dream to earn a living doing something I would do in my spare time. I was pretty good at it too.
I'm not having fun any more, so I've decided to leave the field and become a teacher. I won't earn nearly as much money but I expect to feel more fulfilled, and I hope I can help make a difference to some young people.
I've had an extraordinarily privileged career, and many people never get the luxury of enjoying their work at all. But I'd rather try to enjoy what I do day to day than persist in something that's lost its spark.
You can also use it for regurgitating manuals, but generative AI for coding is counterproductive. Only the tool-addicted and gaming-addicted like it and pretend to be more productive, a claim for which there is no public evidence. I don't see any software improving at a faster rate.
It seems like they're overgeneralizing quite a bit here and focusing on a narrow subset of the population while ignoring the people who are actually thriving with their new AI-enabled dev workflows.
LLMs are not a panacea by any means and they have lots of cons. But I for one would find it difficult to go back to a world where I can't lean on LLMs in my day-to-day.
One very specific example that could not possibly contribute to the brainrot mentioned in this article: AI saves time and reduces the headache of having to pore through pages of documentation (if there even is any) to find how that one method works or what arguments it can take. This alone is immensely helpful and can keep you in a state of flow instead of sending you off on a potentially fruitless side quest that derails your whole train of thought.
It's also taken me quite a bit of time, effort, and experimentation to find the right tools and the right ways to work AI into my workflows, which I would bet the developers mentioned in this article have not explored too deeply, if at all.
Claiming AI is rotting your brain because you can't one-shot an entire app or even a single feature is a straw man fallacy.
I experienced mental pains I never felt with any other activity, except watching TikTok reels for hours.
I got to points of no return on numerous side projects: AI slop that neither the AI nor I could touch.
I've developed a better mental loop. I simply review every line of code it spits out, and refine the loop so it produces less code. But I always demand the full file again.
I commit each change and inspect the diff for review.
I don't feel the drain or the pain.
LLMs still aren't standalone developers, but they can be tamed to execute well on a well-defined scope, if we review what they do, every time.
I have also worked in customer support for some time, and I have found that a huge problem for some people (oftentimes developers) is that they lack a theory of mind. They literally can't comprehend that I don't see into their heads, and that they need to articulate their question with the correct context, otherwise I can't help them.
AI is like a litmus test for it. People who have a theory of mind are capable of putting together a question that will get them good results out of an AI. On the other hand, people who struggle with the fact that the AI can't see what you mean unless it is in the context window will have a bad time with it. These people also usually suck at managing other people because, once again, they are unable to provide tasks with enough context and properly set boundaries. At best they will give you some vague, poorly defined task and get mad when you do it differently than they had in mind.