The real problem arises when non-technical people use an LLM to generate a full project from scratch. The code may work, but it’s often unmaintainable. These people sometimes believe they’re geniuses and view software engineers as blockers, dismissing their concerns as mere technical “mumbo jumbo.”
I don't think this is entirely true. In a lot of cases vibe coding can be a good way to prototype an idea and see how users respond. Obviously don't do it for anything where security is a concern, but that vibe-coded skin cancer recognition quiz that was on the front page the other day is a good example.
> AI now lets anyone write software, but it has limits. People will call upon software practitioners to fix their AI-generated code.
https://www.slater.dev/about-that-gig-fixing-vibe-code-slop/
The first type of merge request is one that should be generated by an LLM and the second is one that should be generated by a human.
Instead I get neither; I get "efficiency" so someone can deliver at the last minute. And then I can go mop up the work later, or my job is hell the next time "we just need to get this out the door".
THANK YOU LLMS
But unlike that six-year gap during the tech nuclear winter (2000-2006), when you could literally follow those over-confident $10/hr kids around cleaning up one botched effort to port custom Windows apps to LAMP after another, this time it will be different. The LLMs are trained largely on the European-dominated codebases on GitHub, and it's just enough to keep the "vibe coders" out of real bad trouble (like porting a financial application from Visual BASIC into PHP, which has different floating-point precision between distributions/releases, or de-normalizing structured customer data and storing it in KV pairs "because everybody is doing it so relational databases must be obsolete"). The work to clean up their "vibe coded" mess will not be as intense (especially considering LLMs will help), but there will be a lot more of it this time around, and re-hosting it more economically will be a Thing.
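For what it's worth, the floating-point trap the comment alludes to is language-agnostic, not a PHP quirk. A minimal sketch in Python (the VB-to-PHP scenario is the commenter's example; this just shows the underlying pitfall and the usual fix):

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly,
# so naive arithmetic on money drifts by tiny amounts.
subtotal = 0.1 + 0.2
print(subtotal)         # prints 0.30000000000000004
print(subtotal == 0.3)  # prints False

# Fixed-point decimal arithmetic (construct from strings!) keeps cents exact,
# which is why financial code reaches for a decimal type.
total = Decimal("0.10") + Decimal("0.20")
print(total)                     # prints 0.30
print(total == Decimal("0.30"))  # prints True
```

Any language that stores currency in IEEE 754 doubles hits the same rounding, so a port that silently swaps a fixed-point currency type for native floats corrupts ledgers regardless of the target language.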
Sadly, American businesses will discover they don't need trillion-parameter LLMs (due to MoE, quantization, agentic mini-models, etc.), the supply of acceptable vector processing chips will catch up to demand (bringing prices down for "on prem" deployments), and that "AI snake oil factor" (non-deterministic behavior and hallucinations) will become more than a concern expressed over weekend C-suite golf games and yacht excursions (you know, where someone always gets fired to set an example of what happens when you don't make your numbers).

AI had been dead so long that the top C-suites can't even remember the details of how/why it died anymore (hint: you could get fired for even saying "AI" up until the 2000 Crash, giving rise to the synonym "ML" as a more laser-focused application of AI), just that they don't trust it. The astonishing demonstrations at OpenAI, Anthropic, xAI, Google and Meta are enough to cause C-suites to write a few checks, causing a couple of ramps in the stock market, but those projects by and large are NOT working out due to the same 'ole same 'ole, and I fear this entire paradigm will suffer the same fate as IBM Watson.

The stock market may well crash again because of this horsepucky even though there IS true potential with this technology, just as with Web 1.0. (All it needs for that is a catalyst event -- maybe not Bill Gates throwing a chair, maybe something in the dispute between Sammy and Elon.) Same as it ever was.
Hallucinations
Context limits
Lack of test coverage and testing-based workflow
Lack of actual docs
Lack of a spec
Great README; cool emoji