That said, I have experience. I could absolutely see myself falling into this as a junior or even mid-level dev. I no doubt wouldn't feel that prickle on my neck if it weren't scarred from code-review lashings early in my career by knowledgeable mentors.
After using LLMs for a while, I have to admit it's pretty nice, and I like using it. I've been vibecoding a few apps, and it's a good dopamine hit to immediately see your ideas come to life. However, based on my experience, it will bite you if you trust it blindly. Even in my vibecoded projects, it keeps adding "features" without me asking for them. Since they're just pet projects, I don’t really care as long as the end result is what I'm expecting, but I don’t think companies will be as flexible. I also don't think customers would like it if features changed or got added with every new fix or update.
So this could go in a bunch of different directions from here, but to summarize the current situation:
A lot of companies are heading in this direction.
Without proper engineering, AI will easily write more code and potentially change the application unintentionally.
We will have fewer junior engineers entering the market because of fear around AI and reduced hiring.
AI usage will hit a critical point where it is making massive amounts of changes, and the people "prompting" it might start getting overwhelmed.
We will end up with more features that people have to keep in their heads. I don’t think we can trust LLMs 100%, and because of that, developers will still need to know exactly what the application does.
Eventually, there will be a lot of bugs, and developers will complain that we need additional human resources.
Hiring starts again.
I think, right now, the toughest position is for new developers, and the best position is for people already in the market.

I recently started a new job, and I find that AI is making it so much harder for me to onboard. I am adjusting to my role much slower than my peers who are using AI less. I am coding in a language I am unfamiliar with, which makes the lure of vibe coding stronger. I am at least skilled enough to recognize when Claude gives me an answer that either makes no sense or is unnecessarily verbose. But the more time I spend asking Claude to write code, the less I feel like I'm developing the skills that the job requires. Plus, when I submit a PR, I lack the necessary confidence in my own work, which just feels bad.
Honestly, another part of this is that I'm asking Claude to search through Slack and docs for answers to questions when I should just ask another person. The AI is feeding my social anxiety, luring me into avoiding human contact that I know will be good for my understanding as well as my general need for social interaction.
That all sounds like I am absolving myself of responsibility, but I think it's important to point out how a given technology is especially addictive for a certain type of person, and traps them in a negative behavioral cycle. If I hold off on relying on AI now, I suspect I can grow in my skills to the point that I can delegate tasks to AI that are rote and easy for me to verify their results. It feels challenging, but it's necessary.
I just can't understand this as a programmer. I use AI a lot as well when I program, but I still write a lot of the code myself, simply because it is easier to write the code I want, especially when I know what I want, than to make the AI understand what kind of implementation I'm thinking of.
https://pchalasani.github.io/claude-code-tools/plugins-detai...
I love coding; it always felt like Legos for adults. Not that Legos aren't already for adults.
But there's no fighting the fact that we won't be writing 99% of the code anymore, so I take pleasure in crafting the specs and requirements clearly; that's where I put the effort.
And then to avoid having to babysit the agents to get them to stick to the plan, I built a super robust external orchestrator that forces multiple review and fix rounds until I get the result I want.
I'll be fully open sourcing that soon also https://engine.build
But why would anyone use AI to write documents or articles? Do you really respect your recipients so little that you can't be bothered to share your own thoughts?
I might as well get an AI to call my own mother on mother's day.
I've learned an insane amount in a very short period of time, and have been engaging in much more challenging problems.
Instead of "what's the right syntax for this for loop again?" I'm asking "what's the business critical module in this system and how do I structure the test suite to prove it's working to spec?"
I do write initial proof of concept crude prototypes (not commented, hardcoded variables, etc), and AI does the productionizing of them. It has really allowed me to command a team of agents instead of keeping track of a bunch of humans of varying work ethic, skill, and ability to maintain high code quality. And often AI is very good at maintaining patterns used in the code base or even keeping them to industry best practices.
When using AI you will no longer be writing so much in programming languages; English, or whatever language you talk to the LLM in, will be the main language.
Also useful: writing the constraint before the session, not after the failure. "The auth state should be checked on the route level rather than the component" becomes quite clear once you see an agent applying the same rule in three slightly different ways in two files. Writing down the constraint beforehand allows you to detect the violation; rubber-stamping achieves nothing.
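The commenter's example rule can be pinned down in code. A minimal sketch of the idea (in plain Python rather than whatever frontend stack the commenter means; `dispatch`, `ROUTES`, and `PUBLIC` are all illustrative names, not a real framework): auth state is checked once at the routing layer, so no individual handler ("component") re-implements the rule in a slightly different way.

```python
# Toy router enforcing the constraint in ONE place (the route layer),
# so no individual handler re-invents the auth check.
PUBLIC = {"/login"}

def dashboard(user):
    return f"dashboard for {user}"   # no auth logic here, by design

ROUTES = {"/dashboard": dashboard}

def dispatch(path, user=None):
    # Route-level guard: the only place auth state is inspected.
    if path not in PUBLIC and user is None:
        return "401 unauthorized"
    handler = ROUTES.get(path)
    return handler(user) if handler else "404"

print(dispatch("/dashboard"))              # 401 unauthorized
print(dispatch("/dashboard", user="ada"))  # dashboard for ada
```

Writing the constraint down in this form before the session starts is exactly what makes the "same rule, three slightly different ways" violation detectable in review.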
What really multiplies in value is not prompting but understanding your own system enough to prompt the agent properly.
I think, if you're not feeling challenged, you're probably just doing the same work but faster. You should try to tackle harder problems, too!
What's actually happened is you took a break from a highly technical skill. Every person on the planet will "forget" some part of that technical skill if they don't use it in a while. But the information is not gone, it's just been de-prioritized for other more pertinent information. The information comes back once you give yourself a refresher.
Before AI, it would be months between me writing a full program in one of several languages. I would forget simple things like how to start a function definition. But I did not really forget, because after a quick glance at an existing function, I remembered all the other possible syntax in the function definition. There's no need to panic, your brain is working normally.
I find myself learning exponentially faster and more. For example, I am currently working with spectroscopy hardware (Raman, NMR), where I got Claude to write code that interfaces with the equipment at the hardware level. Instead of me going through data sheets and writing a bunch of wrapper code, Claude did it for me.
I am able to progress much faster by using Claude to discuss various techniques, implement them, and test them out. This loop would probably have taken me 5-10x more time previously.
And I am learning so much more about these machines/techniques/data than I would have if I had to expend the mental effort to write menial code just to see a result.
I have more than a decade of experience as a developer. I am glad that we are finally moving toward a world where we can use code as a tool, rather than constantly thinking about how to turn it into a product.
I think it's vital that you keep strict control, and really try to understand what the AI is doing. And especially when you're doing something really complex, even Claude Opus can get lost or lose track of the context, and you need to be paying attention when that happens.
I'm at the other end of the spectrum from what the author feels. I feel smarter and more capable with AI, and I'm actually surprised how helpful it is in my workflow. I still write code by hand, but I know way more than I would without it.
Granted, I'm the "accidental programmer in a team that's completely non technical" and AI is simply a senior I'd never have otherwise. YMMV but I think if you use the tool as a more expressive Google search it can be a great companion.
Pure vibe coding is not far from "let's outsource everything", it's just a bit cheaper and more available.
I unironically believe this is a very good habit. When it comes to writing, instead of starting with AI, finishing a chapter by hand first and then asking AI to review it strikes the best balance.
Also, I feel it's fine to let AI write your code. I felt very much like the OP did. A couple of things help keep my sanity. One is that, as developers, I think our job has evolved into knowing which decisions an AI makes are good and which are bad, whether in code or design. There is nowhere a developer (or, for that matter, a knowledge worker) can hide from AI; in this world you will be forced to communicate with it, partly because as a community we have decided (for better or worse) that AI should bring non-trivial productivity gains to software development.
The other is something I still want to validate: for those of us who are mediocre at coding, AI might be a gift, because it frees up time, and thus mind space, to focus on what we are actually good at.
This is revisionist nonsense.
Programmers used to be cowboys, by and large, outside a handful of critical domains. Systematic use of code review, automated tests, source control and so on are relatively new.
What was different is an entire program could fit in one person's head. The stack of abstractions wasn't nearly as deep, necessarily, since you couldn't afford the cost in memory and CPU.
That delivered a different kind of intellectual control, a kind that is exceedingly rare nowadays outside hobby projects.
That approach is making me either lazy or efficient, because I use it for my work.
Working in /plan mode bouncing ideas back and forth with the AI, me catching its wrong assumptions, it filling in knowledge gaps with a clear explanation when needed, is very intellectually stimulating and I think is making me a better engineer. The key for me has been to be socratic with the AI, think through everything it is proposing carefully, and don't get hypnotized by its confidence, perfectly structured arguments, etc.
I personally think it can be a great tool for learning but it's so easy to fall into the trap of getting AI to do everything for you.
I've also used it for personal projects like a Chip8 emulator I wrote in C where I'd managed to run a few basic games and ran out of steam. Used AI to help me implement the rest.
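The core of a Chip-8 interpreter is small enough to sketch. Here is a minimal fetch-decode-execute loop (in Python rather than the commenter's C, covering only three representative opcodes; the `Chip8` class name and layout are illustrative, not the commenter's actual code):

```python
class Chip8:
    """Minimal Chip-8 core: 4 KB memory, 16 registers, a program counter."""
    def __init__(self):
        self.memory = bytearray(4096)
        self.v = [0] * 16          # V0..VF registers
        self.pc = 0x200            # Chip-8 programs load at 0x200

    def load(self, rom: bytes):
        self.memory[0x200:0x200 + len(rom)] = rom

    def step(self):
        # Fetch: each opcode is two big-endian bytes.
        op = (self.memory[self.pc] << 8) | self.memory[self.pc + 1]
        self.pc += 2
        # Decode and execute a few representative opcodes.
        x, nn, nnn = (op >> 8) & 0xF, op & 0xFF, op & 0xFFF
        if op & 0xF000 == 0x1000:      # 1NNN: jump to NNN
            self.pc = nnn
        elif op & 0xF000 == 0x6000:    # 6XNN: set VX = NN
            self.v[x] = nn
        elif op & 0xF000 == 0x7000:    # 7XNN: VX += NN (wraps, no carry flag)
            self.v[x] = (self.v[x] + nn) & 0xFF
        else:
            raise NotImplementedError(f"opcode {op:04X}")

cpu = Chip8()
cpu.load(bytes([0x60, 0x05,   # V0 = 5
                0x70, 0x03])) # V0 += 3
cpu.step(); cpu.step()
print(cpu.v[0])  # 8
```

The full instruction set is only 35 opcodes, which is why Chip-8 is such a popular "ran out of steam, let AI finish it" project: the remaining work is mostly mechanical.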
I try to be coding complex issues at all times, while offloading boring, non-architectural, boilerplate-heavy tasks to it in the background in a git worktree.
I ask it to work in small iterations and commit every step of the way. After my coding session is done, I can go back and review its code.
I find that AI fails at things that are truly creative. I have been thoroughly unimpressed with ideas it has had or things it’s written for me. There’s still a lot of room for human creativity.
* actually writing more on my own - created a personal blog just to get myself to write more
* upleveling my thinking - think more about problems and framing
* leverage my experience - guide (or sometimes force) the AI assistant to leverage my experience to avoid problems
* learning new things - rather than let AI just replace things I can do, I use AI to help me learn new things/technology faster than I would have pre-AI
Yes -- now let's talk about the correct form of fighting back.
It is not "I don't want to feel self-doubt so I will suppress that feeling."
It is, "The self-doubt is valuable -- it's pushing me to improve."
The AI is never going to be able to say what you really mean. But it may inspire you to push harder to improve your ability to do that.
I'm still concerned enough about the specifics to worry about background refresh tokens silently failing in OAuth in a mission-critical real-time system.
I'm not coding it, but I'm still thinking it. That's the important part, isn't it? Is it dumb, or just clever delegation?
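One way to keep that concern concrete without writing the system yourself: make the refresh path fail loudly by construction. A minimal sketch, assuming a hypothetical `refresh_fn` callable that returns a new token or raises; every name here is illustrative, not a real OAuth library API:

```python
import logging
import time

log = logging.getLogger("auth")

def refresh_with_alerting(refresh_fn, retries=3, backoff_s=1.0):
    """Call a token-refresh function, retrying with linear backoff.

    The point is that a failed background refresh is surfaced (logged,
    then raised) instead of being swallowed silently.
    """
    for attempt in range(1, retries + 1):
        try:
            return refresh_fn()
        except Exception as exc:
            log.warning("token refresh attempt %d/%d failed: %s",
                        attempt, retries, exc)
            if attempt == retries:
                # Escalate instead of letting the session quietly die
                # in the background.
                raise RuntimeError("OAuth refresh failed; re-auth required") from exc
            time.sleep(backoff_s * attempt)
```

The design point is simply that the failure mode worried about above, a background task eating the exception, is ruled out: the worst case is a loud `RuntimeError`, never a silently expired session.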
This thing will explode in our faces sooner or later. Also makes me feel like an imposter rather than an engineer.
Maybe that’s actually what I have become.
"4. When the answer feels done, you stop thinking"
You need to spend time on coding without agents and writing without AI as practice if nothing else.
You should not get complacent in offloading all detail oriented work to agents.
And how are people forgetting to code by using LLMs? Do they just mean they forgot the syntax of a particular language? Or forgot how to architect features or how the development lifecycle works?
I've mostly used LLMs to build more complex things that would have been a lot to manage previously, or to build something completely new and learn how it works. I feel like I've only become a better engineer (and programmer too) because of LLMs.
Today I'm forcing myself to learn SwiftUI and to type each character with my own hands. There is a part of me asking, "Why are you wasting your time instead of prompting it and getting the UI you want in minutes?" Well, even if I use AI, I must know the domain I'm operating in to create good products instead of useless slop. Even though I've been coding for 20 years now, I still need to be humble enough to grow in anything new. I can vibecode full apps, but I'm not gonna pretend that my experience isn't playing a massive role in guiding the models.
Don't let AI take away your joy for building stuff, it's totally fine not being "productive" and taking your time. Just force yourself to have, at least, 2 AI days off every week.
I've been using AI coding tools a lot lately, though I'm always in the loop. I write most of the important code by hand, but I like to send Claude Code or Codex off to try to come up with a solution in parallel to compare.
Having reviewed so much of my hand-written code side by side with AI-written alternatives, I am still amazed that anyone admits to letting AI write all of their code. Either you're working on much simpler problems than I am, or you don't really care about anything other than making the tests go green and waiting for bug reports to come back so you can feed them back into the LLM again.
Some times the coding tools come back with better ideas than I came up with. Some times my idea is much better. Most often with medium to high complexity problems, if the AI comes up with a working solution it has enough problems that an attentive human reviewer would have rejected it at best. At worst, it creates a mess of spaghetti code with maintenance time bombs ticking away. And that's for one change. I can't imagine what a codebase would look like if you completely deferred to AI tools to do everything.
This quote is even weirder because they claim to have been doing this for two years! Two years ago, coding tools were much worse than they are today. Using AI to write all of your code 2 years ago would have been a weird choice.
When I read posts like this I don't know what to think. Is this real? Or is it exaggerated for effect?
I also roll my eyes a little bit at the idea that not writing code for 1-2 years means you forget how to code. I've been back and forth between 100% management and 100% IC in my career. While there is a warm-up time to get back into coding, you should not completely forget how to code after such a short time. The only reason this person feels like they've forgotten how to code is that they've made a choice not to code for 2 years and, apparently, they don't feel like making any effort to change this. For someone who claims to love writing code, I don't get it. Something doesn't make sense about this writing.
During the "don't make me think" era of software design, if you wanted to make software you got really good at identifying the use case and using design thinking to optimize the paths to goal. You could make a business around a very narrow set of flows. The only thinking a user had to do was pick The App for That. They never had to think about how they want to approach their task, which is a skill in itself.
AI isn't like that. There's a million ways to use it. That's a big part of what makes it cool, but it requires the user to thoughtfully approach their workflows. Not everyone is used to doing that.
We'll have AGI not because AI is getting smarter, but because we are getting dumber.
If coding a new feature, I do one step and check the code: doing a git diff, reading the changes, or just asking Codex to show me the changes.
If writing an article, I ask for only one paragraph. I read the paragraph, and if it is OK, I accept it; if it doesn't reflect my thoughts, I keep working on that one paragraph.
If doing data analysis with AI, I do one step of the analysis and ask the AI to display intermediate results, so I can see whether everything is going in a good direction and there are no hallucinations; additionally, I have follow-up prompts for the AI to verify the results. If all looks good, then I continue to the next step.
I don't like the situation where I ask AI to do all the code changes, or the whole article, or the whole data analysis in one pass with one prompt. It is simply impossible to check whether the AI is correct, and the results are not satisfactory. You can easily see this when asking AI to write a deep article with one prompt: you clearly see that it doesn't reflect your thoughts.
Maybe step-by-step is the approach to use AI and not feel dumber.
Basically, I only use it for things I wouldn't otherwise have the time to write and that aren't important enough to be written by me.
I actually can't fathom using it for writing, as a principle. To me it's just a keyboard extension for code generation, never a replacement for the written word, which should be in my voice, fully a stream coming from my mind, something I have full editorial awareness and memory of.
Now that I think about it, I'm a snob in this regard: I turn my nose up at people who use AI to write things that are purely written. In my mind, using it for writing defeats the purpose of writing!
However, if I were to release a solution that I 'vibe-coded' into the wild, I would feel quite a bit of shame if someone figured out that I used an LLM to write the entire thing. I know it may come off as a bit silly, but it is a feeling I cannot seem to shake, a feeling that prevents me from wanting to adopt the technology in full force, because... well, I did not truly create the software if AI did all the work. Sure, the software might have been my idea, but that does not bring me much fulfillment.
I know programming is just a means to an end, but I feel like I have put in a lot of hard work over the past decade and a half just to barely scratch the surface of mediocrity. I was attracted to this field because I saw a sense of beauty in computer science (and programming). It felt like one of the few remaining options for a creative job that was spared from the cutthroat nature of a career in the arts.
Like the Samurai class during the early industrialization of Japan, maybe it's time for me to lay down my sword too.
The work rhythm has ballooned, and as every coworker is now pushing work (generally mediocre, but acceptable thanks to strong codebase fundamentals and them being good engineers), it is increasingly becoming a rat race of who delivers more. Companies don't even need to promote AI productivity, because engineers being engineers will engineer the minimum effort required to deliver as much output as keeps stakeholders happy.
I am less and less fond of this work.
I'm sure there will be people with different experiences, but I've never worked as much as I have in the last two years, and I'm thoroughly burned out. I genuinely feel I've regressed as an engineer, and I see the same in my coworkers, some of them contributors to the highest-impact OSS projects you can think of.
Every day, I'm more and more leaning into changing industry.
I love code and programming and solving product problems. But the job has changed dramatically.
If the pay+comfort ratio wasn't that good I would've done that already.
It's hard to give up 6-7k+ net per month in the southern Mediterranean. I'm way better off financially than most US devs making even more; there's no comparison.
Based on the MIT and MSFT studies.
1. a general coding AI: Completely broken. Should auto-comment, but never does anymore. Stopped a while back, nobody seems to know why.
2. another general AI: You have to at-chat it. It reacts to the message with <eyes emoji>, but never actually posts a comment?
3. a security bot. Comments, when it thinks there's a problem, in the most obtuse way possible. "SAST findings". But the findings are behind a link, and none of us devs are given access.
I could lean on and press the various people shoving AI down my gullet to like … look at this, and the actual lived experience of devs trying to derive productivity from this mess? But IDK what's in it for me, really.
Even Claude, when it worked, would comment in the most sociopathic manner possible: an English prose description of the problem, attached to an utterly unrelated line of code. Part of that is probably GitHub, which doesn't let you attach comments to arbitrary lines of code in a review; only the blesséd lines can have comments. Literally none of our AIs can format their complaint as a freaking suggested change (i.e., the GitHub feature). No, instead I get English prose.
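For reference, the GitHub feature meant here is a fenced `suggestion` block inside a review comment; when attached to a commentable diff line, the reviewer's replacement text renders as a one-click applyable change (the comment text and replacement line below are made up for illustration):

````markdown
This swallows the exception; re-raise it instead.

```suggestion
        raise TokenRefreshError("refresh failed") from exc
```
````

The suggestion's body replaces the line (or line range) the comment is anchored to, which is exactly the structured output the bots above fail to produce.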
Honestly for all I know we failed to pay the bill or something inane, but it would be nice if the AI could format an error message, or something.
And of course the hedonic treadmill (if that's even valid any more, IDK) has reset the baseline so that anything less than the quick gratification feels like nothing. It makes the stuff I used to absolutely love feel like more of a chore compared to just cranking out features with code only an AI can love.
So, for example, AI once deleted my project. I was able to recover it, but through a series of mistakes I lost version control, and IMO I lost a good version. (I think after abandoning that project and coming back, I was able to accomplish it.)
Another example, the one that is biting me the most: I wanted to create a copy.sh/v86-based thing where you are able to edit the .img files of distros and save them all within the browser. I was able to run v86 in a custom way, but I wasn't able to mount the images or find a proper way to make it work.
And although this is just an optional project, and I just thought, hey, it would be fun to edit .img files in the browser, now it mostly leaves me disappointed.
I think that disappointment is partly frustration at the thing not working, and partly the realization that I might be dropping the idea altogether. I must admit this is a field I have absolutely no expertise in, but it still feels disappointing, and I've kept thinking about it for some time now.
I wonder how many people feel that way when AI is unable to build their project: frustrated, disappointed, maybe even a jolt of panic. I think it's just wrong how damn much we are relying on LLMs at this point. It feels like the whole economy is doing what I am doing, but with billions of dollars.
Another thing I feel is that young and elderly people end up much the same when vibe-coding. (Yes, specs can help, but LLMs are still autocorrect on steroids.) I feel we are forsaking both the junior developers and the expertise built up by senior developers as we replace them with these LLMs.
If you don't do this constantly, LLMs can certainly lead you right down the Dunning-Kruger path (though that's a big oversimplification of a whole collection of psychological features, from idée fixe to narcissism to fear of failure/criticism). If you really work at getting the LLM into the proper state, it will happily rip your work apart in a rather cruel and indifferent manner, like an unsympathetic corporate gatekeeper who relishes exposing your flaws in a public setting. Debate club is another tactic that's a bit less harsh: you have the LLM flip back and forth between defense and prosecution of your work.
I think this should be the default setting, but it doesn’t encourage engagement, the average customer will think the LLM is a mean jerk if it starts off like that.
Most people, given a nail gun, can't build a house; that's where the skill is...
I'm not someone whose validation comes from the lines of code; it comes from the resulting working system.