Sorry, this is a bit off-topic, but I have to call this out.
The area absolutely does change; you can see this in the trivial example from the first to the second step in https://yagmin.com/blog/content/images/2026/02/blocks_cuttin...
The corners are literally cut away.
What doesn't change is the total length of the edges, which is a kind of Manhattan distance.
The edge length is bounded below by the length of the straight line, but it never actually approaches that limit; it stays the same at every step.
The area, however, absolutely does approach its limit: in fact you remove half of the remaining excess area at each iteration.
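A quick numeric sketch of the standard unit-square construction (my own illustration, not taken from the linked post) makes the difference concrete:

```python
import math

# Approximate the diagonal of a unit square with an n-step staircase.
# The staircase's total edge length never changes, while the area between
# the staircase and the diagonal halves every time the step count doubles.
for k in range(1, 8):
    n = 2 ** k                  # number of steps at this iteration
    edge_length = 2.0           # n steps of 1/n across + 1/n up is always 2
    excess_area = 1 / (2 * n)   # n small triangles, each (1/2) * (1/n)^2
    print(f"steps={n:4d}  edge length={edge_length}  excess area={excess_area:.6f}")

# The straight line the staircase "approaches" visually:
print("diagonal length =", math.sqrt(2))  # the edge length never gets closer to this
```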
It’s not like it changes our industry’s overall flavour.
How many SaaS apps are Excel spreadsheets made production grade?
It’s like every engineer forgets that humans have been building a Tower of Babel for 300,000 years. And somehow there is always work to do.
People like vibe coding and will do more of it. Then make money fixing the problems the world will still have when you wake up in the morning.
Let's see where it goes!
Only one or two of those questions are actually related to programming. (Even though most developers wear multiple hats.) If an organization has the resources to hold a six-person meeting for adding dark mode, I'd sure hope at least one of them is a designer who's knowledgeable about UX, because most of those questions are ones that they should bring up and have an answer for.
I like the idea that 'code is truth' (as opposed to 'correct'). An AI should be able to use this truth and mutate it according to a specification. If the output of an LLM is incorrect, it is unclear whether the specification is incorrect or if the model itself is incapable (training issue, biases). This is something that 'process engineering' simply cannot solve.
I'm working on a tool that uses structured specs as a single source of truth for automated documentation and code generation. Think the good parts of SpecKit + Beads + Obsidian (it's actually vault compatible) + Backstage, in a reasonably sized TypeScript codebase that leverages existing tools. The interface is almost finalized; I'm polishing up the CLI, squashing bugs, and getting good docs ready for a real launch, but if anyone's curious they can poke around the GitHub in the meantime.
One neat trick I'm leveraging to keep the nice human ergonomics of folders + markdown while enforcing structure and type safety is to have a CUE intermediate representation, then serialize to a folder of markdown files, with all object attributes besides name and description going into front matter. It's the same pattern used by Obsidian vaults; you can even open the vaults it creates in Obsidian if you want.
This structure lets you lint your specs and do code generation via template pattern matching to automatically generate code + tests + docs from your project vault, so you have one source of truth that's very human-accessible.
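To make the pattern concrete, here's a rough sketch of the front matter serialization (the fields beyond name and description are placeholders for illustration, not the tool's actual schema):

```python
from dataclasses import dataclass, asdict

@dataclass
class Spec:
    name: str
    description: str
    owner: str    # placeholder attribute
    status: str   # placeholder attribute

def to_markdown(spec: Spec) -> str:
    # Everything except name and description goes into YAML front matter,
    # which is what Obsidian reads as note properties.
    extra = {k: v for k, v in asdict(spec).items() if k not in ("name", "description")}
    front_matter = "\n".join(f"{k}: {v}" for k, v in extra.items())
    return f"---\n{front_matter}\n---\n\n# {spec.name}\n\n{spec.description}\n"

print(to_markdown(Spec("Dark mode", "Add a dark theme toggle.", "design-team", "draft")))
```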
That’s why you should always make your agent write tests first and run them to make sure they fail and aren’t testing that 1+1=2. Then and only then do you let it write the code that makes them pass.
That creates an enormous library of documentation of micro decisions that can be read by the agent later.
Your codebase ends up with executable documentation of how things are meant to work. Agents can do archaeology on it and demystify pretty much anything in the codebase. It makes agents’ amnesia a non-issue.
I find this far more impactful than keeping specs around. It detects regressions and doesn’t drift.
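A toy illustration of the "write it, watch it fail" step (the module and function names are made up for the example):

```python
import pytest

# These tests are written before myapp.text.slugify exists, so the first run
# fails with an ImportError -- proof the tests exercise something real rather
# than asserting that 1 + 1 == 2.

def test_slugify_lowercases_and_joins_words():
    from myapp.text import slugify  # not implemented yet
    assert slugify("Hello  World") == "hello-world"

def test_slugify_rejects_empty_input():
    from myapp.text import slugify  # not implemented yet
    with pytest.raises(ValueError):
        slugify("")
```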
I'm not sure I agree; you don't need to vibe document at all.
What I do in general is:
- write two separate markdown files: business requirements first, implementation later
- keep refining both as the work progresses and stakeholders provide feedback
Before merging I have /docs updated based on the requirements and implementation files. New business logic gets included in the business docs (what and why), and new rules/patterns get merged into the architectural/code docs.
Works great, and it gets better with every new PR and iteration.
If your code does a shit job of capturing the requirements, no amount of markdown will improve your predicament until the code itself is concise enough to be a spec.
Of course you're free to ignore this advice. Lots of the world's code is spaghetti code. You're free to go that direction and reap the reward. Just don't expect to reach any further than mediocrity before your house of cards comes tumbling down, because it turns out "you don't need strong foundations to build tall things anymore" is just abjectly untrue.
As a developer, I would rather just write the code and let AI write the semi-structured data that explains it. Creating reams of flow charts and stories just so an AI can build something properly sounds like hell to me.
- The lowest layers are the most deterministic and the highest layers are the most vibe coded
- Tooling and configuration sit at the lowest layers and features sit at the highest layers
https://en.wikipedia.org/wiki/Nyquist_rate
https://en.wikipedia.org/wiki/Noisy-channel_coding_theorem
Loosely, this means that if we're above the Shannon limit of -1.6 dB (i.e., below a 50% error rate), then data can be retransmitted some number of times to reconstruct it:
number of retransmissions = log(desired residual uncertainty) / log(odds of failure per attempt)
where the residual uncertainty for n sigma, using the normal cumulative distribution function Φ, is: uncertainty = 1 - Φ(n)
So for example, if we want to achieve the gold-standard 5 sigma confidence level of physics for a discovery (an uncertainty of 2.87x10^-7), and we have a channel that's n% noisy, here is a small table showing the number of resends needed:

Error rate    Number of resends
0.1%          3
1%            4
10%           7
25%           11
49%           ~650
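Here is that arithmetic as a minimal sketch for the first four rows (a naive independent-attempt model; the 49% case sits so close to the 50% limit that it needs a more careful treatment):

```python
import math

FIVE_SIGMA = 2.87e-7  # one-sided 5 sigma tail probability, as above

def resends_needed(error_rate: float) -> int:
    # Assuming independent attempts, failing n times in a row has
    # probability error_rate ** n, so solve error_rate ** n <= FIVE_SIGMA.
    return math.ceil(math.log(FIVE_SIGMA) / math.log(error_rate))

for p in (0.001, 0.01, 0.10, 0.25):
    print(f"error rate {p:.1%}: {resends_needed(p)} resends")
# prints 3, 4, 7, and 11 resends respectively
```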
In practice, the bit error rate for most communication channels today is below 0.1% (dialup is 10^-6 to 10^-4, ethernet is around 10^-12 to 10^-10), meaning that sending 512 or 1500 byte packets over dialup and ethernet respectively results in a cumulative resend rate of around 4% (dialup) and 0.1% (ethernet). Just so we have it, the maximum transmission unit (MTU), which is the 512 or 1500 bytes above, can be calculated by:
MTU in bits = (desired packet loss rate)/(bit error rate)
So (4%)/(10^-5) = 4000 bits = 500 bytes for dialup and (0.0000001)/(10^-11) = 10000 bits = 1250 bytes for ethernet. 512 and 1500 are close enough in practice, although ethernet has jumbo frames now since its error rate has remained low despite bandwidth increases.
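And the MTU rule of thumb as a one-liner, using the same numbers:

```python
def mtu_bits(desired_packet_loss: float, bit_error_rate: float) -> float:
    # Cap the packet size so the expected per-packet loss stays at the target rate.
    return desired_packet_loss / bit_error_rate

print(mtu_bits(0.04, 1e-5) / 8)    # dialup:   500.0 bytes
print(mtu_bits(1e-7, 1e-11) / 8)   # ethernet: 1250.0 bytes
```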
So even if AI makes a mistake 10-25% of the time, we only have to re-run it about 10 times (or run 10 individually trained models once) to reach a 5 sigma confidence level.
In other words, it's the lower error rate achieved by LLMs in the last year or two that has provided enough confidence to scale their problem solving ability to any number of steps. That's why it feels like they can solve any problem, whereas before that they would often answer with nonsense or give up. It's a little like how the high signal to noise ratio of transistors made computers possible.
Since GPU computing power vs price still doubles every 2 years, we only have to wait about 7 years for AI to basically get the answer right every time, given the context available to it.
For these reasons, I disagree with the article's premise that AI may never offer enough certainty to provide engineering safety, though I appreciate and have experienced the sentiment. This is why I estimate that the Singularity may arrive within 7 years, but certainly within 14 to 21 years, at that rate of confidence-level increase.