The contracts argument above sounds nice in theory, but in practice most codebases don't have well-defined contracts for anything security-sensitive. They have implicit assumptions that only make sense if you've been in the code long enough to absorb them. AI doesn't absorb those; it pattern-matches around them. And if the humans reviewing the AI output also don't have that context anymore, then honestly, who's catching these things?
If there are well-defined contracts for the software, and the software behaves correctly, is it really necessary to understand the code entirely? We already develop on top of many abstractions, ignoring how the code actually executes on the hardware, without any issues.
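To make that concrete with a toy sketch of my own (not anything from the thread, and the withdraw example is made up): a "contract" can be as small as machine-checked pre- and postconditions, rather than an implicit assumption that callers always validate their inputs.

    # Made-up example: an explicit, checkable contract for a
    # security-sensitive operation.
    def withdraw(balance: int, amount: int) -> int:
        """Return the new balance after withdrawing `amount`."""
        # Preconditions: what the caller must guarantee.
        assert amount > 0, "amount must be positive"
        assert amount <= balance, "amount must not exceed balance"
        new_balance = balance - amount
        # Postcondition: what this function guarantees in return.
        assert new_balance >= 0, "balance never goes negative"
        return new_balance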
Secondly, wouldn't AI help with understanding the codebase and make that easier as well? Debugging must also benefit immensely from AI-assisted tools.
So I'm less concerned overall with the auto-generated code, as long as the code that's landing is reviewed by an AI bot that's aggressively prompted to ensure the code is as simple as it can be.
If AI is going to write a large percentage of the code, the highest-leverage thing a developer can do might actually be slowing down and deeply understanding the system (not generating more code faster).
I noticed I was spending more time reconstructing context than actually building:
– figuring out what changed
– tracing data flow
– rebuilding mental models before I could even prompt properly (without breaking other features)
– debugging slop with more slop
Better understanding → better prompts, fewer breaking changes, and more real debugging.
Over the weekend I hacked on a small prototype exploring this idea. It visualizes execution flow and system structure to make it easier to reason about unfamiliar or AI-modified codebases.
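To give a rough idea of what I mean by execution flow (toy sketch only, not the prototype's actual code; the real thing renders this graphically, and the example functions below are made up):

    import sys

    depth = 0

    def tracer(frame, event, arg):
        # Print each function call as an indented call tree.
        global depth
        if event == "call":
            print("  " * depth + frame.f_code.co_name)
            depth += 1
        elif event == "return":
            depth -= 1
        return tracer

    def parse(line):
        return line.strip().split(",")

    def load(lines):
        return [parse(line) for line in lines]

    def main():
        return load(["a,b", "c,d"])

    sys.settrace(tracer)
    main()
    sys.settrace(None)

Even this crude version of the trace makes it much faster to see which paths an AI-generated change actually touches.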
Not really a polished “product” — more a thinking tool / experiment.
I’m curious whether others are running into the same bottleneck, or if this is just a local maximum I’ve fallen into.