If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.
I'm fundamentally convinced that my investment in deep, long-term grokking of a project will allow me to surpass primarily LLM-driven projects in raw velocity over the long term.
It also stands to reason that any task I deem NOT to further my goal of learning or deep understanding, and that can be done by an LLM, I will use the LLM for. And as it turns out there are a TON of those tasks, so my LLM usage is incredibly high.
LLMs are not good enough for you to set and forget. You have to stay nearby babysitting it, keeping half an eye on it. That's what's so disheartening to many of us.
In my career I have mentored junior engineers and seen them rapidly learn new things and increase their capabilities. Watching over them for a short while is pretty rewarding. I've also worked with contract developers who were not much better than current LLMs, and like LLMs they seemed incapable of learning directly from me. Unwilling, even. They were quick to say nice words like, "ok, I understand, I'll do it differently next time," but then they didn't change at all. Those were some of the most frustrating times in my career. That's the feeling I get when using LLMs for writing code.
That is exactly the type of help that makes me happy to have AI assistance. I have no idea how much electricity it consumed. Somebody more clever than me might have prompted the AI to generate the other 100 loc that used the struct to solve the whole problem. But it would have taken me longer to build the prompt than it took me to write the code.
Perhaps an AI might have come up with a more clever solution. Perhaps memorializing a prompt in a comment would be super insightful documentation. But I don't really need or want AI to do everything for me. I use it or not in a way that makes me happy. Right now that means I don't use it very much. Mostly because I haven't spent the time to learn how to use it. But I'm happy.
Has there been any sort of paradigm shift in coding interviews? Is LLM use expected/encouraged or frowned upon?
If companies are still looking for people to write code by hand, then perhaps the author is onto something. If, however, we as an industry are moving on, will those who don't adapt be relegated to hobbyists?
1. The thing to be written is available online. AI is a search engine to find it, and maybe also to translate it into the language of choice.
2. The thing (system or component or function) is genuinely new. The spec has to be very precise and the AI is just doing the typing. This is, at best, working around syntax issues, such as some hard-to-remember particular SQL syntax or something like that. The languages should be better.
3. It's neither new nor available online, but there is a lot to type out and modify. The AI does all the boilerplate (see the sketch below for the kind of thing I mean). This is a failure of the frameworks and languages that require so much boilerplate.
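To illustrate case 3, here is a minimal sketch of the kind of boilerplate an LLM can reliably type out: mechanical serialization code that follows an obvious pattern. All names here (UserProfile, to_dict, from_dict) are hypothetical, invented for illustration rather than taken from any real project.

```python
from dataclasses import dataclass


@dataclass
class UserProfile:
    user_id: int
    email: str
    display_name: str

    def to_dict(self) -> dict:
        # Mechanically map each field to a key; none of this needs design thought.
        return {
            "user_id": self.user_id,
            "email": self.email,
            "display_name": self.display_name,
        }

    @classmethod
    def from_dict(cls, data: dict) -> "UserProfile":
        # The mirror image of to_dict; pure typing work.
        return cls(
            user_id=data["user_id"],
            email=data["email"],
            display_name=data["display_name"],
        )
```

Multiply this by a few dozen types and you get the "lots to type out and modify" case: nothing here is hard, it is just volume.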
I think, though, that it is probably better for your career to churn out lines: it takes longer to radically simplify, and people don't always appreciate the effort. If instead you go the other way and increase scope, time, and complexity, that is more likely to result in rewards to you for the greater effort.
I think the people writing those 10 lines of code worry their jobs are now becoming obsolete. In cases where the code required googling how to do X with Y technology, that's true. That's just going to be trivially solvable, and it will cause us to not need as many developers.
In my experience though, the 10 lines of finicky code use case usually has specific attributes:
1. You don't have well-defined requirements. We're discovering correctness as we go. We 'code' to think through how to solve the problem, adding / removing / changing tests as we go.
2. The constraints / correctness of this code are extremely multifaceted. It simultaneously matters for it to be fast, correct, secure, easy to use, etc.
3. We're adapting a general solution (i.e. a login flow) to our specific company or domain, and the latter requires us to provide careful guidance to the LLM to get the right output.
It may be Claude Code working around these fewer bits of code, but in these cases it's still important to have taste and care about the code details themselves.
We may weirdly be in a situation where it's possible to single-shot a Slack clone, but taking the time to change the two small features we care about is time-consuming and requires thoughtfulness.
If they don’t like it, take it away. I just won’t do that part because I have no interest in it. Some other parts of the project I do enjoy working on by hand, at least setting up the patterns I think will result in simple, readable flow, reduce potential bugs, etc. AI is not great at that. It’s happy to mix strings, nulls, bad type castings, no separation of concerns, no small understandable functions, no reusable code, etc., which is the part I enjoy thinking about.
I sometimes dread working on code that's in a bad state of disrepair or is overly complex (think a lot of the "enterprise" code out there). It got so bad that I more or less quit a job over it, though I never really stated that publicly; your mind goes to dark places when you have pressure to succeed but the circumstances are stacked against you.
For a while I had a few Markdown files that went into detail about exactly why I hated it, in addition to being able to point my finger at a few people responsible for it. I tried approaching it professionally, but it never changed and the suggestions and complaints largely fell on deaf ears. Obviously I've learnt that while you can try to provide suggestions, some people and circumstances will never change; often it's about culture fit.
But yeah, outsource all of that to AI, don't even look back. Your sanity is worth more than that.
I can use AI to help me explore libraries or to replace a search, generate small snippets here and there, or even scripts that I occasionally need. But I can't vibecode. I don't know how to let go, I babysit too much, I read the code, and I feel uneasy if I don't understand what I'm building, or why I'm building it in a certain way. I need to understand how the pieces work to make a whole.
I very much enjoy the activity of writing code. For me, programming is pure stress relief. I love the focus and the feeling of flow, I love figuring out an elegant solution, I love tastefully structuring things based on my experience of what concerns matter, etc.
Despite the AI tools I still do that: I put my effort into the areas of the code that count, or that offer an intellectually stimulating challenge, or where I want to make sure I manually explore and think my way into the problem space and try out different API or structure ideas.
In parallel to that I keep my background queue of AI agents fed with more menial or less interesting tasks. I take the things I learn in my mental "main thread" into the specs I write for the agents. And when I need to take a break on my mental "main thread" I review their results.
IMHO this is the way to go for us experienced developers who enjoy writing code. Don't stop doing that, there's still a lot of value in it. Write code consciously and actively, participate in the creation. But learn to utilize agents and keep them busy in parallel, or when you're off-keyboard. Delegate, basically. There are quite a lot of things they can do already that you really don't need to do yourself because the outcome is completely predictable. I feel it's possible to actually increase the hours per day spent focusing on stimulating problems that way.
The "you're just mindlessly prompting all day" or "the fun is gone" are choices you don't need to be making.
If you "set and forget", then you are vibe coding, and I do not trust for a second that the output is quality, or that you'd even know how that output fits into the larger system. You effectively delegate away the reason you are being paid onto the AI, so why pay you? What are you adding to the mix here? Your prompting skills?
Agentic programming to me is just a more efficient use of the tools I already used anyway, but it's not doing the thinking for me, it's just doing the _doing_ for me.
There’s talk of war in the state of Nationstan. There are two camps: those who think going to war is good and just, and those who think it is not practical. Clearly not everyone is pro-war; there are two camps. But the Overton Window is defined by the premise that invading another country is a right that Nationstan has and can act on. There is, by definition (inside the Overton Window), no one who is anti-war on the principle that the state has no right to do it.[2]
Not all articles in this AI category are outright positive. They range from the euphoric to the slightly depressed. But they share the same premise of inevitability; even the most negative will say that, of course I use AI, I’m not some Luddite[3]! It is integral to my work now. But I don’t just let it run the whole game. I copy–paste with judicious care. blah blah blah
The point of any Overton Window is to simulate lively debate within the confines of the premises.
And it’s impressive how many aspects of “the human” (RIP?) it covers. Emotions, self-esteem, character, identity. We are not[4] marching into irrelevance without a good consoling. Consolation?
[1] https://news.ycombinator.com/item?id=44159648
[2] You can let real nations come to mind here
This was taken from the formerly famous (and controversial among the Khmer Rouge-obsessed) Chomsky, now living in infamy for obvious reasons.
[3] Many paragraphs could be written about this
[4] We. Well, maybe me and others, not necessarily you. Depending on your view of whether the elites or the Mensa+ engineers will inherit the machines.
I also like writing code by hand, I just don't want to maintain other people's code. LMK if you need a job referral to hand refactor 20K lines of code in 2 months. Do you also enjoy working on test coverage?
True, and you really do need to internalize the context to be a good software developer.
However, just because coding is how you're used to internalizing context doesn't mean it's the only good way to do it.
(I've always had a problem with people jumping into coding when they don't really understand what they are doing. I don't expect LLMs to change that, but the pernicious part of the old way is that the code -- much of it developed in ignorance -- became too entrenched/expensive to change in significant ways. Perhaps that part will change? Hopefully, anyway.)
The reason Claude code or Cursor feels addictive even if it makes mistakes is better illustrated in this post - https://x.com/cryptocyberia/status/2014380759956471820?s=46
For me, LLMs are joyful experiences. I think of ideas and they make them happen. Remarkable and enjoyable. I can see how someone who would rather assemble the furniture, or perhaps build it, would like to do that.
I can’t really relate but I can understand it.
It absolutely is.
>Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
You don't ship it, the AI does. You're just the middleman, a middleman they can eventually remove altogether.
>Now, I would be lying if I said I didn’t use LLMs to generate code. I still use Claude, but I do so in a more controlled manner.
"I can quit if I want"
>Manually giving claude the context forces me to be familiar with the codebase myself, rather than tell it to just “cook”. It turns code generation from a passive action to a deliberate thoughtful action. It also keeps my brain engaged and active, which means I can still enter the flow state. I have found this to be the best of both worlds and a way to preserve my happiness at work.
And then soon the boss demands more output, like the output delivered by the guys who left it all to Claude and even run five instances in parallel.
I think we should be worrying about more urgent things, like a worker doing the job of three people with AI agents, the mental load that comes with that, how much of the disruption caused by AI will disproportionately benefit owners rather than employees, and so on.
For me, LLMs have been a tremendous boon in terms of learning.
Succinctly: process over product.
I am not responsible for choosing whether the code I write uses a for loop or a while loop. I am responsible for whether my implementation - code, architecture, user experience - meets the functional and non-functional requirements. For well over a decade, my responsibilities have required delegating the work to other developers or even outsourcing an entire implementation to another company, like a SalesForce implementation.
So I'll take the horse to work from now on.
In fact, it's even worse - driving a car is one of the least happy modes of getting around there is. And sure, maybe you really enjoy driving one. You're a rare breed when it comes down to it.
Yet it's responsible by far for the most people-distance transported every day.
I almost never agree with the names Claude chooses. I despise the comments it adds every other line despite my telling it over and over and over not to. Oftentimes I catch the silly bugs that look fine at first glance when you just let Claude write its output directly to the file.
It feels like a good balance, to me. Nobody on my team is working drastically faster than me, with or without AI. It very obviously slows down my boss (who just doesn't pay attention and has to rework everything twice) or some of the juniors (who don't sufficiently understand the problem to begin with). I'll be more productive than them even if I am hand-writing most of the code. So I don't feel threatened by this idea that "hand-written code will be something nobody does professionally here soon" -- like the article said, if I'm responsible for the code I submit, I'm still the bottleneck, AI or not. The time I spend writing my own code is time I'm not poring over AI output trying to verify that it's actually correct, and for now that's a good trade.
Bean counters don't care about creativity and art though, so they'll never get it.
You could look back throughout human history at the inventions that made labor more efficient and ask the same question. The time-savings could either result in more time to do even more work, or more time to keep projects on pace at a sane and sustainable rate. It's up to us to choose.