My thoughts on vibe coding vs production code:
- vibe coding can 100% get you to a PoC/MVP probably 10x faster than pre LLMs
- This is partly b/c it is good at things I'm not good at (e.g. front end design)
- But then I need to go in and double check performance, correctness, information flow, security etc
- The LLM makes this easier, but the improvement drops to about 2-3x b/c there is a lot of back and forth, plus me reading the code to confirm, etc. (yes, another LLM could do some of this, but then that needs to get set up correctly, etc.)
- The back and forth part can be faster if e.g. you have scripts/programs that deterministically check outputs
- Testing workloads that take hours to run still take hours to run whether a human or an LLM is testing them (aka that is still the bottleneck)
So overall, this is why I think we're getting wildly different reports on how effective vibe coding is. If you've never built a data pipeline and an LLM can spin one up in a few minutes, you think it's magic. But if you've spent years debugging complicated trading or compliance data pipelines, you realize that the LLM is saving you some time, but not 10x time.
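The "deterministically check outputs" bullet above can be sketched as a tiny golden-output check that an agent (or a human) can rerun after every change. This is a hypothetical example; the keys and values are made up, not from the comment:

```python
# Minimal golden-output check: compare a run's summary stats against a
# committed "golden" baseline, so eyeballing output isn't needed each round.
def check_output(actual: dict, golden: dict) -> list[str]:
    """Return human-readable mismatches; an empty list means the check passes."""
    problems = []
    for key, expected in golden.items():
        if key not in actual:
            problems.append(f"missing key: {key}")
        elif actual[key] != expected:
            problems.append(f"{key}: expected {expected!r}, got {actual[key]!r}")
    return problems

if __name__ == "__main__":
    golden = {"rows": 100, "checksum": "abc123"}   # illustrative baseline
    actual = {"rows": 100, "checksum": "abc124"}   # illustrative pipeline output
    for p in check_output(actual, golden):
        print("FAIL:", p)
```

Wired into the loop as a script the agent must run before declaring success, this turns "read the diff and hope" into a pass/fail signal.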
When an agent takes a shortcut early on, the next step doesn't know it was a shortcut. It just builds on whatever it was handed. And then the step after that does the same thing. So by hour 80 you're sitting there trying to fix what looks like a UI bug and you realize the actual problem is three layers back. You're not doing the "hard 20%." You're paying interest on shortcuts you didn't even know were taken. (As I type this I'm having flashbacks to helping my kid build lego sets.)
The author figured this out by accident. He stopped prompting and opened Figma to design what he actually wanted. That's the move. He broke the chain before the next stage could build on it. The 100 hours is what it costs when you don't do that.
I know it's not the point of this article, but really?
The author accidentally proved it: the moment they stopped prompting and opened Figma to actually design what they wanted, Claude nailed the implementation. The bottleneck was NEVER the code generation, it was the thinking that had to happen BEFORE ever generating that code. It sounds like most of you offload the thinking to AFTER the complexity has arisen when the real pattern is frontloading the architectural thinking BEFORE a single line of code is generated.
Most of the 100-hour gap is architecture and design work that was always going to take time. AI is never going to eliminate that work if you want production grade software. But when harnessed correctly it can make you dramatically faster at the thinking itself, you just have to actually use it as a thinking partner and not just a code monkey.
There's some 80/20-ness to all programming, but with the current state-of-the-art coding models, the distribution is the most extreme it's ever been.
Honestly, seeing all the dumb code that it produces, calling this thing "intelligent" is rather generous...
I needed it, so I quickly built it myself, for myself, and for myself only.
Expecting a one-shot 1.0 release is unrealistic because the sheer volume of context and decision-making required for a finished product is enormous.
Instead, I think of LLMs as being trained on the "delta" of software development: the pull requests, review comments, and issue discussions that move a project from one version to the next.
When you use an LLM for coding, you are effectively tapping into the collective output of a team of developers and a crowd of users. My mental model has shifted accordingly: I no longer try to be the "coder." Instead, I act as the PR reviewer and the passionate power user. My job is to point out edge cases and refine the output, rather than expecting a finished product in one go.
It’s making me a better maintainer and a more precise communicator, even if the "100-hour gap" to production remains a reality.
Human-written code usually has a consistent set of mistakes. You learn the author's blind spots and can predict where bugs live. AI-generated code is more uniformly correct on the surface but the mistakes are random and uncorrelated. There's no pattern to grep for. The code looks clean and idiomatic but occasionally does something subtly wrong in a way that has no relationship to the surrounding lines.
The 100 hour gap the author describes is real, but there's likely a second gap nobody's accounting for yet: the maintenance gap. It shows up the first time someone who didn't write the prompt needs to fix a production issue at 2am and can't reason about why the code was written the way it was, because the "why" lived in a conversation that was never saved.
When we start selling the software, and asking people to pay for/depend upon our product, the rules change, substantially.
Whenever we take a class or see a demo, they always use carefully curated examples to make whatever they're teaching seem absurdly simple. That's what you are seeing when folks demonstrate how "easy" some new tech is.
A couple of days ago, I visited a friend's office. He runs an Internet tech company that builds sites, does SEO, does hosting, provides miscellaneous tech services, etc.
He was going absolutely nuts with OpenClaw. He was demonstrating basically rewiring his entire company with it. He was really excited.
On my way out, I quietly dropped by the desk of his #2; a competent, sober young lady that I respect a lot, and whispered "Make sure you back things up."
I've always said, the easiest part of building software is "making something work." The hardest part is building software that can sustain many iterations of development. This requires abstracting things out appropriately, which LLMs are only moderately decent at and most vibe coders are horrible at. Great software engineers can architect a system and then prompt an LLM to build out various components of the system and create a sustainable codebase. This takes time and attention in a world of vibe coders who are less and less inclined to give their vibe-coded products the attention they deserve.
It sped me up (and genuinely helped with some ideas) but not 10x.
The bits I didn't design myself I definitely needed to inspect and improve before the ever-eager busy beaver drove them into the ground.
That said, I'm definitely impressed by how a frontier model can "reason" about Go code that's building an AST to generate other Go code, and clearly separate what's available at generation time vs. at runtime. There's some sophistication there, and I found myself telling them often "this is the kind of code I want to generate, build the AST."
I also appreciated how the faster models are good enough at slightly fuzzy find-and-replace. Like, I need to do this refactor, I did two samples of it here, can you do these other 400? I have these test cases in language X, I converted 2, can you do the other 100? Even these simple things saved me a lot of time.
In return I got something that can translate SQLite compiled to Wasm into 500k lines of Go in about a month of my spare time.
"when used appropriately" means:
- Setting up guardrails: use a statically typed language, linters, CLAUDE.md/skills for best practices.
- Told to do research when making technical decisions, e.g. "look online for prior art" or "do research and compare libraries for X"
- Told to prioritize quality and maintainability over speed. Saying we have no deadline, no budget, etc.
- Given extensive documentation for any libraries/APIs it is using. Usually I will do this as a pre-processing step, e.g. "look at 50 pages of docs for Y and distill it into a skill"
- Given feedback loops to check its work
- Has external systems constraining it from making shortcuts, e.g. "ratchet" checks to make sure it can't add lint suppressions, `unsafe` blocks, etc.
And, the most important things:
- An operator who knows how to write good code. You aren't going to get a good UI/app unless you can tell it what that means. E.g. telling it to prioritize native HTML/CSS over JS, avoiding complexity like Redux, adding animations but focus on usability, make sure the UI is accessible, etc.
- An operator who is steering it to produce a good plan. Not only to make sure that you are building the right thing, but also you are explaining how to test it, other properties it should have (monitoring/observability, latency, availability, etc.)
A lot of this comes down to "put the right things in the context/plan". If you aren't doing that, then of course you're going to get bad output from an LLM. Just like you would get bad output from a dev if you said "build me X" without further elaboration.
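The "ratchet" check mentioned in the guardrails above can be a very small script: count suppression markers in the tree and fail CI if the count ever goes up. A hypothetical sketch; the marker strings, file extensions, and baseline value are illustrative assumptions, not from the comment:

```python
# Ratchet check: the number of lint/type/safety suppressions may only
# decrease. Run in CI so an agent can't quietly silence the tooling.
from pathlib import Path

# Illustrative markers an agent might use to dodge checks.
MARKERS = ("# noqa", "# type: ignore", "eslint-disable", "unsafe {")

def count_suppressions(root: Path) -> int:
    """Total occurrences of any suppression marker under root."""
    total = 0
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in {".py", ".ts", ".rs"}:
            text = path.read_text(errors="ignore")
            total += sum(text.count(m) for m in MARKERS)
    return total

def ratchet(current: int, baseline: int) -> bool:
    """True if the check passes: no new suppressions vs the committed baseline."""
    return current <= baseline
```

The baseline count lives in a committed file; when someone legitimately removes suppressions, they lower the baseline, and the ratchet tightens.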
Also this article uses 'pfp' like it's a word, I can't figure out what it means.
I'm able to vibe code simple apps in 30 minutes, polish them in four hours, and now I've been enjoying one for 2 months.
So many people are just shouting ‘I wanna go fast’ and completely forgetting the lessons learned over the past few decades. Something is going to crash and burn, eventually.
I say this as a daily LLM user, albeit a user with a very skeptical view of anything the LLM puts in front of me.
The result worked but that's just a hacked together prototype. I showed it to a few people back then and they said I should turn it into a real app.
To turn it into a full multi-user scalable product... I'm still at it a year later. Turns out it's really hard!
I look at the comments about weekend apps, and I have some of those too, but creating a real, actually valuable, bug-free MVP takes work no matter what you do.
Sure, I can build apps way faster now. I spent months learning how to use AI. I did a refactor back in May that was a disaster. The models back then were markedly worse, and it rewrote my app, effectively destroying it. I sat at my desk for 12 hours a day for 2 weeks trying to unpick that mess.
Since December things have definitely gotten better. I can run an agent up to 8 hours unattended, testing every little thing and produce working code quite often.
But there is still a long way to go to produce quality.
Most of the reason it's taking this long is that the agent can't solve the design and infra problems on its own. I end up going down one path, realising there is another way, and backtracking. If I accepted everything the AI wanted, then finishing would be impossible.
And then there is one guy, a friend of mine, who is planning to release a "submit a bug report, we will fix it immediately" feature: collect the error report from a user, possibly interview them, assess whether it's a bug or not with a "product owner LLM", then fix it autonomously, and if it passes the tests, merge and push to prod, all under one hour. That's for a mid-cap company, for their client-facing product. F*** hell! I have a full bag of bug reports ready for when this hits prod :->
(emphasis added)
Whether it was actually written by hand or by AI was glossed over, but as soon as giving away money was on the table, the author seems to have ditched AI.
I built 4 AI products to hundreds of thousands of users, working with AI agents as collaborators, not autopilots. The difference isn't the tool. It's whether you can tell the AI is wrong and stop it before it wastes 10 hours going down the wrong path.
The author watched Claude create new S3 buckets for several rounds before catching it. An experienced engineer catches that on the first diff. Most of those 100 hours were spent not knowing you're lost.
"Vibecoding" as a concept is the problem. It implies you can vibe your way through engineering. You can't. AI is a force multiplier, not a replacement for knowing what good looks like.
I expected OP to actually 'learn' devops, but what they did was just ask LLMs to do everything.
Also...
> 180+ paid $2 for a dino
People pay $2 for an image of a dinosaur with a human face?
Used Codex for the whole project. At first I used Claude for the architecture of the backend, since that's where I usually work and have experience. The code runner and API endpoints were easy to create for the first prototype. But then it got to the UI, and here's where sh1t got real. The first UI was in React even though I had specifically told it to use Vue. The code editor and output window were a mess in terms of height, there was too much space between the editor and the output window, and no matter how much time I spent prompting it and explaining, it just never got it right. Got tired and opened Figma, used it to refine the design to what I wanted. Pushed the code it generated to GitHub, cloned it locally, then told Codex to copy the design, and finally it got it right.
Then came the hosting, where I wanted the code runner endpoint to be in a Docker container for security purposes, since someone could execute malicious code that took over the server if I just hosted it without some protection, and here it kept selecting out-of-date Docker images. Had to manually guide it again on what I needed. Finally deployed and got it working, complete with a domain name. Shared it with a few friends, and they suggested some UI fixes, which took some time.
For the runner security hardening I used Deepseek and Claude to generate a list of code I could run to expose potential issues, and despite Codex saying all was fine, I was able to uncover a number of issues. Then here is where it got weird: it started arguing with me despite being shown all the issues present. So I compiled all the issues in one document and shared the Dockerfile, the Linux seccomp config file, and the issues document with Claude. It gave me a list of fixes for the Dockerfile to help with security hardening, which I shared back with Codex, and that's when it fixed them.
Currently most of the issues are resolved, but the whole process took me a whole week and I am still not done; I was working most evenings. So I agree that you cannot create a usable product used by lots of users in 30 minutes, not unless it's some static website. It's too much work of constant testing and iteration.
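For a code-runner endpoint like the one described above, much of the hardening boils down to the flags you pass when spawning the container. A minimal sketch, assuming Docker is on the host; the image name, mount path, and resource limits are illustrative, not the commenter's actual setup:

```python
# Sketch: launch untrusted user code in a locked-down container.
# The flags are standard `docker run` options; values are assumptions.
import subprocess

def sandbox_cmd(image: str, code_path: str) -> list[str]:
    """Build a docker command that denies network, writes, and privileges."""
    return [
        "docker", "run", "--rm",
        "--network=none",                       # no network for untrusted code
        "--read-only",                          # immutable root filesystem
        "--cap-drop=ALL",                       # drop all Linux capabilities
        "--security-opt", "no-new-privileges",  # block privilege escalation
        "--memory=256m", "--pids-limit=64",     # cap memory and process count
        "-v", f"{code_path}:/code/main.py:ro",  # mount the snippet read-only
        image, "python", "/code/main.py",
    ]

def run_untrusted(image: str, code_path: str, timeout_s: int = 10):
    """Execute with a hard wall-clock timeout; returns CompletedProcess."""
    return subprocess.run(sandbox_cmd(image, code_path),
                          capture_output=True, timeout=timeout_s)
```

A custom seccomp profile (`--security-opt seccomp=profile.json`, as the commenter ended up using) narrows the allowed syscalls further; the flags above are the baseline before that.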
Some people seem to be better at it than others. I see a huge gulf in what people can do. Oddly, there is a correlation between being a good engineer pre-AI and being able to vibe code well.
But I see one odd thing. A subset of those whom people would consider good or even amazing pre-AI struggle. The best I can tell at this stage is that they never learned to get good results out of unskilled workers in the past, and just relied on their own skills to carry the project.
AI coders can do some amazing things. But at this stage you have to be careful about how you guide them down a path, in the same way you did with junior engineers. I am not comparing AI to juniors; they can by far code better than most senior engineers, and have access to knowledge at lightning speed.
I shipped a React Native app recently and probably 30% of the total dev time was wrapping every async call in try/catch with timeouts, handling permission denials gracefully, making sure corrupted AsyncStorage doesn't brick the app, and testing edge cases on old devices. None of that is the fun part. None of it shows up in a demo. But it's the difference between "works on my machine" and "works in production."
Vibecoding gets you to the demo. The gap is everything after that.
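The "wrap every async call" chore from the comment above can largely be factored into one reusable helper. The commenter's app is React Native, so treat this Python asyncio version as a sketch of the pattern, not their code; the timeout and fallback values are illustrative:

```python
# Defensive wrapper for async calls: hard timeout plus a safe fallback,
# so one flaky network call or corrupted store can't brick the app.
import asyncio

async def guarded(coro_fn, *args, timeout=5.0, fallback=None):
    """Run an async callable; return `fallback` on timeout or any error."""
    try:
        return await asyncio.wait_for(coro_fn(*args), timeout=timeout)
    except Exception:
        # In a real app: log the failure and surface a gentle error to the user.
        return fallback
```

The point isn't this particular helper; it's that the error paths the demo never exercises (timeouts, denied permissions, corrupted local state) are where a lot of that undemoable 30% of dev time goes.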
Even pretty massive companies like Databricks don't think about those things; they basically have a UI template library that they then compose all their interfaces from. Nothing fancy. It's all about features, and LLMs create copious amounts of features.
As we move from tailors to big box stores I think we have to get used to getting what we get, rather than feeling we can nitpick every single detail.
I'd also be more interested in how his 3rd, 4th or 5th vibe coded app goes.
Something much closer to production SDLC patterns than a Figma mockup.
The old rules still apply mainly.
Before LLMs the slow part was writing code. Now the slow part is validating whether the generated code is actually correct.
Those are not copies; they aren't even features. Usually it's part of a tiny feature that barely works, and only in a demo.
With all the vibe coding in the world today, you still need at least 6 months full time to build a nice note-taking app.
If we are talking something more difficult - it will be years - or you will need a team and it will still take a long time.
Everything less will result in an unusable product that works only for demo and has 80% churn.
The interesting shift seems to be that building the first version is no longer the bottleneck; distribution, UX polish, and reliability are.
There are plenty of ways to code and use code; whichever works for you is good, just improve on it and make it more effective. I have multiple screens on my computer, and I don't like jumping back and forth opening tabs and browsers, so I have my setup arranged the best way that works for me. As for the AI models, they are not going to be that helpful to you if you don't understand why they're doing what they're doing in a particular function or crate (in the case of Rust) or library. I imagine a top coder with years of experience, knowledge of multiple languages, and deep knowledge of libraries could, using the same technique, technically replace a whole department by himself.
The only thing he needed to code was an NFT wrapper, which presumably is just forking an existing NFT wholesale.
The interesting, user-facing part of the project isn't code at all! It's just an HTML front end on someone else's image generator and a "pay me" button.
Very disappointing.
EXCEPT... you've just vibe coded the first 90 percent of the product, so completing the remaining 10 percent will take WAY longer than normal because the developers have to work with a spaghetti mess.
And right there this guy has shown exactly how little people who are not software developers with experience understand about building software.
To have a polished software project, you must spend time somewhat menially iterating and refining (as each type of user).
To have a polished software project, you need to have started with tests and test coverage from the start for the UI, too.
Writing tests later is not as good.
I have taken a number of projects from a sloppy vibe-coded prototype to 100% test coverage. Modern coding LLM agents are good at writing just enough tests for 100% coverage.
But 100% test coverage doesn't mean that it's quality software, that it's fuzzed, or that it's formally verified.
Quality software requires extensive manual testing, iteration, and revision.
I haven't even reviewed this specific project; it's possible that the author developed a quality (CLI?) UI without e2e tests in so much time?
Was the process for this more like "vibe coding" or "pair programming with an LLM"?
I would say the remaining 10% is about how robust your solution is; anything associated with 'vibe' feels inherently insecure. If you can objectively prove it is not, that's 10% of your time well spent.
Which part of "commodity" is confusing???
There are some good points here to improve harnesses around development and deployment though, like a deployment agent should ask if there is an existing S3 bucket instead of assuming it has to set everything up. Deployment these days is unnecessarily complicated in general, IMO.
I have to say it's a little sad that so many devs think of security and cryptography the same way as library frameworks, in that they see it as just some black-box API to use for their projects, rather than respecting that it's a fully developed, complex field that demands expertise to avoid mistakes.
[Disclaimer: that I have read. Doesn't mean there weren't others.]
Too bad it's about NFTs but we can't have everything, can we?