Letting a robot write code for me, however tedious it would be to write manually, made me feel like I was working in someone else's codebase. It reminds me of launching a videogame and letting someone else play through the boring parts. I might as well not be playing. Why bother at all?
I understand this behaviour if you're working for a company on some miserable product, but not for personal projects.
LLM agents have made building products, especially small ones, a lot easier, but they sacrifice much of the craft in the details and, if the project is small enough, in the architecture. I've certainly enjoyed using them a lot over the last year and a half, but I've come to really miss fully wrapping my head around a problem, having intimate knowledge of the details of the system, and taking pride in every little detail.
edit - an interesting facet of AI progress is that the split between these two types of work gets more and more granular. It has led me to be actively aware of what I'm doing as I work, and to critically examine whether certain mechanics are inherently toil or genuinely creative. I realized that a LOT of what I do feels creative but isn't - the manner in which I type, the way I shape and format code. It's more catharsis than creation.
Just don't expect to run a successful restaurant based on it.
I legitimately enjoy scale practice when I'm playing piano. In a similar way I've always found some pleasure in writing boilerplate and refactoring.
There is joy/peace and instructive value in the "boring" parts of almost every discipline. It's perhaps more meditative and subtle, but it's still very much there in abundance, and it primes you much better for a real flow state.
Ever since AI exploded at my day job, I haven't legitimately been in anything resembling a programming flow state at work.
More concrete examples to illustrate the core points would have been helpful. As-is, the article doesn't offer much - sorry.
For one, I'm not sure what kind of code he writes. How does he write tests? Are these unit tests, or property-based tests? How does he quantify success? It leaves a lot to be desired.
This is too low level. You’d be better off describing the things that need testing and asking for it to do red/green test-driven development (TDD). Then you’ll know all the tests are needed, and it’ll decide what tests to write without your intervention, and make them pass while you sip coffee :)
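Roughly what I mean, with invented names - red is the agent writing a failing test for behaviour you described, green is it making the test pass:

```python
# Hypothetical red phase: the test exists before the implementation does.
# test_slug.py (slug.py doesn't exist yet, so these fail first)
from slug import slugify


def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Hi, there!") == "hi-there"
```

Your only real job is checking that the tests describe behaviour you actually want; the implementation is whatever makes them pass.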
> [Where] I don’t trust it yet is when code must be copy pasted.
Ask it to perform the copy-paste using code - have it write and execute a quick script. You can review the script before it runs and that will make sure it can’t alter details on the way through.
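Something in this spirit (a rough sketch; the markers and file names are invented):

```python
# copy_block.py - move a marked block between files verbatim, so the
# text never passes through the model where it could get paraphrased.
import re
import sys

src, dst = sys.argv[1], sys.argv[2]
text = open(src).read()

# grab everything between the markers, byte-for-byte
m = re.search(r"# BEGIN-COPY\n(.*?)# END-COPY", text, re.S)
if not m:
    sys.exit(f"no marked block found in {src}")

with open(dst, "a") as f:
    f.write(m.group(1))

print(f"copied {len(m.group(1).splitlines())} lines from {src} to {dst}")
```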
AI takes the craft out of being an IC. IMO less enjoyable.
AI takes the human management out of being an EM. IMO way more enjoyable.
Now I can direct large-scope endeavors and 100% of my time is spent on product vision and making executive decisions. No sob stories. No performance reviews. Just pure creative execution.
I'm excited to work on more things that I've been curious about for a long time but didn't have the time/energy to focus on.
I’m working on library code in Zig, and it’s very nice to have AI write the FFI layer to Python. That’s not technically difficult or high risk, but it is tedious and boring.
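The glue it writes is usually along these lines (a sketch; the function and library names are made up):

```python
# bindings.py - thin ctypes layer over a Zig shared library.
# Assumes the Zig side exports something like:
#   export fn checksum(data: [*]const u8, len: usize) u64 { ... }
# built with e.g. `zig build-lib checksum.zig -dynamic`.
import ctypes

lib = ctypes.CDLL("./libchecksum.so")  # .dylib/.dll on other platforms

lib.checksum.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
lib.checksum.restype = ctypes.c_uint64


def checksum(data: bytes) -> int:
    """Python-friendly wrapper over the raw Zig export."""
    return lib.checksum(data, len(data))
```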
Realistically having a helper to get me over slumps like that has been amazing for my personal productivity.
I feel more like a software producer or director than an engineer though.
These kinds of pain points usually indicate too much architecture, or the wrong kind. Whether we can still feel these things when the clanker does the work is something we must think about.
the boilerplate stuff is spot on though. the 10-type dispatch pattern is exactly where i gave up doing it manually
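for anyone who hasn't hit it, the shape is roughly this (toy example, names made up):

```python
# toy version of the pattern: one entity, N variants, one near-identical
# branch per variant (imagine ten of these, repeated across several layers)
from dataclasses import dataclass


@dataclass
class Invoice:
    amount: float


@dataclass
class Refund:
    amount: float


@dataclass
class CreditNote:
    amount: float
# ...seven more variants in the real thing


def process(entity) -> tuple[str, float]:
    match entity:
        case Invoice(amount=a):
            return ("debit", a)
        case Refund(amount=a):
            return ("credit", a)
        case CreditNote(amount=a):
            return ("credit", a)
        # ...and one more near-identical case per remaining variant
        case _:
            raise TypeError(f"unhandled entity: {type(entity).__name__}")
```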
I hate writing proposals. It's the most mind numbing and repetitive work which also requires scrutinizing a lot of details.
But now I've built a full proposal pipeline (skills, etc.) that goes from "I want to create a proposal" to a finished document: it collects all the info I need, creates a folder in Google Drive, I add all the supporting docs, and it generates a React page, uses code to calculate the numbers in tables, and builds an absolutely beautiful react-to-pdf PDF file.
I have a comprehensive document outlining all the work our company's ever done, made by analyzing all past proposals and past work in Google Drive, and the model references it when weaving in our past performance/clients.
It is wonderful. I can now just say things like "remove this module from the total cost" without having to edit various parts of the document (as with hand-editing code). Claude (or anything else) will just update the "code" for the proposal (which is a JSON file) and the new proposal is ready, with perfect formatting, perfect numbers, perfect tables, everything.
So I can stay high level thinking about "analyze this module again, how much dev time would we need?" etc. and it just updates things.
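The core of it, as a toy sketch (invented names and numbers):

```python
# toy illustration: the proposal is structured data, every derived
# number is computed, so one edit keeps the whole document consistent
proposal = {
    "client": "Acme Corp",
    "modules": [
        {"name": "Discovery", "hours": 40, "rate": 150},
        {"name": "Build", "hours": 200, "rate": 150},
        {"name": "Support", "hours": 30, "rate": 120},
    ],
}

# "remove this module from the total cost" becomes a one-line data edit;
# the rendering and every total regenerate from the data
proposal["modules"] = [m for m in proposal["modules"] if m["name"] != "Support"]

total = sum(m["hours"] * m["rate"] for m in proposal["modules"])
print(f"Total: ${total:,}")  # -> Total: $36,000
```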
If you'd like me to do something like this with your company, get in touch :) I'm starting to think (as of this week) others will benefit from this too and can be a good consulting engagement.
Uh, no. The happy path is the easy part with little to no thinking required. Edge cases and error handling are where we have to think hardest and learn the most.
> That includes code outside of the happy path, like error handling and input validation. But also other typing exercises like processing an entity with 10 different types, where each type must be handled separately. Or propagating one property through the system on 5 different types in multiple layers.
With AI, I feel I'm less caught up in the minutiae of programming and have more cognitive space for the fun parts: engineering systems, designing interfaces and improving parts of a codebase.
I don't mind this new world. I was never too attached to my ability to pump out boilerplate at a rapid pace. What I like is engineering and this new AI world allows me to explore new approaches and connect ideas faster than I've ever been able to before.
https://lighthouseapp.io/blog/introducing-lighthouse
It looks like a vibe-coded website.
What's worse, the more I rely on the bot, the less my internal model of the code base is reinforced. Every problem the bot solves, no matter how small, doesn't feel like a problem I solved and understanding I'd gained, it feels like I used a cheat code to skip the level. And passively reviewing the bot's output is no substitute for actively engaging with the code yourself. I can feel the brainrot set in bit by bit. It's like I'm Bastian making wishes on AURYN and losing a memory with every wish. I might get a raw-numbers productivity boost now, but at what cost later?
I get the feeling that the people who go on about how much fun AI coding is either don't actually enjoy programming or are engaging in pick-me behavior for companies with AI-use KPIs.
My work often entails tweaking, fixing, and extending some fairly complex products and libraries, and AI will explain the various internal mechanisms and logic of those products to me while producing the necessary artifacts.
Sure my resulting understanding is shallow, but shallow precedes deep, and without an AI "tutor", the exploration would be a lot more frustrating and hit-and-miss.
imo, this isn't paranoid at all - the copied code very likely does get filtered through the LLM unless you provide a tool/skill and explicit instructions. Even then you're rolling the dice, and the diff will have to be checked.
...just not for users.
I run 17 products as an indie maker. AI absolutely helps me ship faster: I can prototype in hours what used to take days. But the understanding gap is real. I've caught myself debugging AI-generated code where I didn't fully grok the failure mode because I didn't write the happy path.
My compromise: I let AI handle the first pass on boilerplate, but I manually write anything that touches money, auth, or data integrity. Those are the places where understanding isn't optional.