All of it smells of a (lousy) junior software engineer: from configuring the root logger at the top, at module level (which relies on module import caching not to be applied twice), to building a config file parser by hand instead of using one from the stdlib, to a race condition in load_json, which checks for the file's existence with an if and then carries on as if the file is certainly there...
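To make that race concrete, here is a minimal sketch of the pattern being criticized (hypothetical names, not the project's actual code) alongside the idiomatic fix:

```python
import json
import logging
import os

# Module-level root-logger configuration: runs on first import and relies on
# Python's module cache not to be applied twice.
logging.basicConfig(level=logging.INFO)

def load_json_racy(path):
    # TOCTOU race: the file can disappear between the check and the open,
    # so open() can still raise FileNotFoundError.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)

def load_json(path):
    # EAFP fix: just attempt the open and handle the absence explicitly.
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return None
```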
In a nutshell, if the rest of it is like this, it simply sucks.
All said, it’s hard knowing it’s possible to use an LLM to spit out a crappy but functional version of whatever I’ve dreamt up, without the satisfaction of building it. Yet it also seems demotivating to spend the time crafting it when I know I could use an LLM to do the majority of it. So I’m in a mental quagmire: this past year has been the first year since at least 2000 that I haven’t built anything significant in scale. It’s indirectly ruining the fun for me for some reason. Kind of just venting, but curious if anyone else feels this way too?
As far as knowledge/experience, I worry about a day where "vibe coding" takes over the world and it's only the greybeards that have any clue WTF is going on. Probably profitable, but also sounds like a hellscape to me.
I would hate to be a junior right now.
I agree with the author here, but my worry is that by leaning on the LLMs, the very experience that allows me to uniquely leverage them now will start to atrophy, and in a few years' time I'll be relying on them just to keep up.
I am not going to spend half an hour coming up with that prompt, tweaking it, and then spend many hours (on the optimistic side) tracking down all the hallucinated code and hidden bugs. Been there once; never going to do that again.
I'd rather do it myself and have peace of mind.
I start every piece of work, green or brown, with a markdown file that often contains my plan, task breakdown, data models (including key fields), API / function details, and sample responses.
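Purely as an illustration — the structure and names below are invented, not the commenter's actual template — such a plan file might look something like this:

```markdown
# Plan: export service

## Task breakdown
- [ ] Define data model
- [ ] Implement POST /exports endpoint
- [ ] Add status polling

## Data model
Export: id (uuid), status (pending | done | failed), created_at (ISO 8601)

## API details
POST /exports       -> 202 {"id": "...", "status": "pending"}
GET  /exports/{id}  -> 200 {"id": "...", "status": "done", "url": "..."}
```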
For the tool part, though, I took a slightly different approach. I decided to use Rust primarily for all my projects, as the compile-time checks are a great way to ensure the correctness of the generated code. I have noticed many more errors are detected in AI-generated Rust code than in any other language. I am happy about it because these are errors that I would have missed in other languages.
If it’s high surprise then there’s a greater chance that you can’t tell right code from wrong code. I try to reframe this in a more positive light by calling it “exploration”, where you can ask follow up questions and hopefully learn about a subject you started knowing little about. But it’s important for you to realize which mode you are in, whether you are in familiar or unfamiliar waters.
https://royalicing.com/2025/infinite-bicycles-for-the-mind
The other benefit an experienced developer can bring is using test-driven development to guide and constrain the generated code. It’s like a contract that must be fulfilled, and TDD lets you switch between using an LLM or hand crafting code depending on how you feel or the AI’s competency at the task. If you have a workflow of writing a test beforehand it helps with either path.
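As a minimal sketch of that contract idea (the parse_duration helper and its behavior are hypothetical): the test is written first and pins down the expected behavior, and then either you or the LLM writes code until it passes.

```python
import pytest

# Written before any implementation exists; `duration` is a hypothetical
# module that either a human or an LLM will create to satisfy this contract.
from duration import parse_duration

def test_parses_minutes_and_seconds():
    assert parse_duration("1m30s") == 90

def test_rejects_malformed_input():
    with pytest.raises(ValueError):
        parse_duration("banana")
```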
Basically, the state of the art right now can turn me into an architect/CTO who spends a lot of time complaining about poor architectural choices. Crucially, Claude does not quite understand how to greenfield good architectures. 3.7 is also JUST. SO. CHATTY. It’s better than 3.5, but more annoying.
Gemini 2.5 needs one more round of coding tuning; it’s excellent, has longer context and is much better at arch, but still occasionally misformats or forgets things.
Upshot — my hobby coding can now be ‘hobby startup making’ if I’m willing to complain a lot, or write out the scaffolding and requirements docs. It provides nearly no serotonin boost from getting into flow and delivering something awesome, but it does let me watch YouTube on the side while it codes.
Decisions...
1. Is the company providing the model willing to indemnify _your_ company when using code generation? I know GitHub Copilot will do this with the models they provide on their hardware, but if you’re using Claude Code or Cursor with random models do they provide equal guarantees? If not I wonder if it’s only a matter of time before that landmine explodes.
2. In the US, AFAICT, software that is mostly generated by non-humans is not copyrightable. This is not an issue if you’re creating code snippets from an LLM, but if you’re generating an entire project this way then none or only small parts of the code base you generate would then be copyrightable. Do you still own the IP if it’s not copyrightable? What if someone exfiltrates your software? Do you have no or little remedy?
Senior developers have the experience to think through and plan out a new application for an AI to write. Unfortunately a lot of us are bogged down by working our day jobs, but we need to dedicate time to create our own apps with AI.
Building a personal brand has never been more important, so I envision a future where devs have a personal website with thumbnail links (like fancy YouTube thumbnails) to all the small apps they have built. Dozens of them, maybe hundreds, all with beautiful or modern UIs. The prompts they used can be the new form of blog articles. At least that's what I plan to do.
First, I can still use neovim, which is a massive plus for me. Second, it’s been pretty awesome to offload tasks. I can say something like “write some unit tests for this file, here are some edge cases I’m particularly concerned about”, then just let it run and continue with something else. I come back a few minutes later to see what it came up with. It’s a fun way to work.
I find it quite interesting how we can do a very large chunk of the work up front in design, in order to automate the rest of the work. It's almost as if waterfall was the better pattern all along, but we just lacked the tools at the time to make it work out.
“This is especially noteworthy because I don’t actually know Python. Yes, with 25+ years of software development experience, I could probably write a few lines of working Python code if pressed — but I don’t truly know the language. I lack the muscle memory and intimate knowledge of its conventions and best practices.”
You should not use AI to just “do” the hard job since, as many have mentioned, it does it poorly and sloppily. Use AI to quickly learn the advantages and disadvantages of the language; then you do not have to navigate through documentation to learn everything, just validate what the AI outputs. All is contextual, and since you know what you want at a high level, use AI to help you understand the language.
This costs speed, yes, but I have more control and gain knowledge about the language I chose.
I'm blasting through tickets, leaving more time to tutor and help junior colleagues and do refactoring. Guiding them has been a multiplier, and also a bit of an eye-opener about how little real guidance they'd been getting up until now. I hadn't realised how resource-constrained we'd been as a team, leaving too little time for guiding and helping them.
I don't trust the tools with writing code very often, but they are very good at architecture questions, outputting sample code, etc. Supercharged Google.
As a generalist, I feel less overwhelmed.
It's probably been the most enjoyable month at this job.
I know Python, but have been coding in Go for the last few years. So I'm thinking how I'd implement this in Go.
There's a lot of code there. Do you think it's a lot, or does it not matter? It seems reasonably clear though, easy to understand.
I'd have expected better documentation/in-line comments. Is that something that you did/didn't specify?
Actually coding is a relatively small part of my job. I could use an LLM for the other parts, but my employer does not appreciate being given word salad.
> For controllers, I might include a small amount of essential details like the route name: [code]
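The article's actual snippet is elided above; purely as an invented illustration of the idea (the article may well use a different stack), the prompt context for a controller could be pared down to a stub like this Flask-style one:

```python
from flask import Flask

app = Flask(__name__)

# Hypothetical controller stub: the route name is the one essential detail
# handed to the model; the body is deliberately left for it to fill in.
@app.route("/api/v1/problem-reports", methods=["POST"])
def create_problem_report():
    ...
```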
Commit history: https://github.com/dx-tooling/platform-problem-monitoring-co...
Look, I honestly think this is a fair article and some good examples, but what is with this inane “I didn’t write any of it myself” claim that is clearly false that every one of these articles keeps bringing up?
What’s wrong with the fact you did write some code as part of it? You clearly did.
So weird.
Coding by prompt is the next lowering of the bar and vibe coding even more so. Totally great in some scenarios and adds noise in others.
No, the article was just something about enjoying AI. This is hardly anything related to senior software developer skills.
1. Clearly define requirements
2. Clearly sketch architecture
3. Set up the code tool suite
4. Let AI agent write the remaining code
Is that better price-performance than going lighter on 1-3 and, instead of 4, spending that time writing the code yourself with heavy input from LLM autocomplete, which is what LLMs are elite at?
The agent will definitely(?) write the code faster, but quality and understanding (tech debt) can suffer.
IOW the real takeaway is that knowing the requirements, architecture, and tooling is where the value is. LLM Agent value is dubious.
It also makes coding a lot less painful: because I'm not making typos or weird errors (since so much code autocompletes), I spend less time debugging too.
Those lamenting the loss of manual programming: we are free to hone our skills on personal projects, but for corporate/consulting work, you cannot ignore a 5x speed advantage. It's over. AI-assisted coding won.
I just hope that most hiring managers now realize this. With AI, the productivity of younger developers has gone up by a factor of 10x, but the productivity of us seasoned developers has gone up 100x. This evens the playing field, I hope, so that experienced folks get a fair shake in the hiring process, rather than what's been happening for decades: 20-somethings pretending to interview the older guys because some boss told them to, while never having any actual intention of hiring anyone over 40, purely on the basis of age, even if the older guy aces the interview.
1. Do piddly algorithm-type stuff that I've done 1000 times and isn't complicated. (Could take or leave this; it's often more work than just doing it from scratch.)
2. Pasting in gigantic error messages or log files to help diagnose what's going wrong. (HIGHLY recommend.)
3. Give it high level general requirements for a problem, and discuss POTENTIAL strategies instead of actually asking it to solve the problem. This usually allows me to dig down and come up with a good plan for whatever I'm doing quickly. (This is where real value is for me, personally.)
This allows me to quickly zero in on a solution, but more importantly, it helps me zero in strategically too, with less trial and error. It lets me have an in-person whiteboard meeting (as I can paste images/text to discuss too) where I've got someone else to bounce ideas off of.
I love it.
We all know how big companies handle software: if it works, ship it. Basically, once this shit starts becoming very mainstream, companies will want to shift into their 5x modes (for their oh-so-holy investors that need to see the stock go up, obviously).
So once this sloppy prototype is seen as working, they will just ship the shit-sandwich prototype. And the developers won't know what the hell it means, so when something breaks in the future (and that's a when, not an if), they will need AI to fix it for them, because once again they do not understand what is going on.
What I’m seeing here is you proposing replacing one of your legs with AI and letting it do all the heavy lifting, just so you can lift heavier things for the moment.
Once this bubble crumbles, the technical debt will be big enough to sink companies. I won't feel sorry for any of the AI boosties, but I do for their families, who will go into poverty.
I have a business that's turning over millions in ARR at the moment (made during the pandemic). It's a pest control business, and we have a small team with only one experienced senior engineer; we used to have five, but with AI we reduced it to one, whom we are still paying well.
Even with maintenance, we plan ahead for this with an LLM and make changes accordingly.
I think we will see more organizations opting for smaller teams and reducing engineer count, since the generated code now works, speeds up development, and is "good enough".
Other devs will say things like "AI is just a stupid glorified autocomplete, it will never be able to handle my Very Special Unique Codebase. I even spent 20 minutes one time trying out Cursor, and it just failed"
Nope, you're just not that good, obviously. I am literally 10x more productive at this point. Sprint goals have become single afternoons. If you are not tuned in to what's going on here and embracing it, you are going to be completely obsolete in the next 6 months, unless you are some extremely niche high-level expert. It won't be a dramatic moment where anyone gets "fired for AI". Orgs will just simply not replace people through attrition when they see productivity staying the same (or even increasing) as headcount goes down.
Also, keyframing can be done in a more autonomous fashion. Senior engineers can truly vibe code if they set up a proper framework for themselves. Keyframing as described in the article is too manual.