Writing _all_ (waves hands around various llm wrapper git repos) these frameworks and harnesses, built on top of ever changing models sure doesn't feel sensible.
I don't know what the best way of using these things is, but from my personal experience, the defaults get me a looong way. Letting these things churn away overnight, burning money in the process, with no human oversight seems like something we'll collectively look back at in a few years and laugh about, like using PHP!
I also spend most of my time reviewing the spec to make sure the design is right. Once I'm done, the coding agent can take 10 minutes or 30 minutes. I'm not really in that much of a rush.
The trick is just not mixing/sharing the context. Different instances of the same model don't recognize each other's output as their own, so neither gets any more compliant toward the other.
https://benhouston3d.com/blog/the-rise-of-test-theater
You have to actively work against it.
1. one agent writes/updates code from the spec
2. one agent writes/updates tests from identified edge cases in the spec.
3. a QA agent runs the tests against the code. When a test fails, it examines the code and the test (the only agent that can see both) to determine blame, then gives feedback to the code and/or test writing agent on what it perceives the problem as so they can update their code.
(repeat 1 and/or 2 then 3 until all tests pass)
Since the code can never fix itself to directly pass the test, and the test can never fix itself to accept the behavior of the code, you get some independence. The failure case is that the tests simply never pass, not that the test-writer and code-writer agents share the same incorrect understanding of the spec. That coincidence is heat-death-of-the-universe improbable; it's far more likely the spec isn't well grounded, is ambiguous or contradictory, or the problem is too big for the LLM, and so the tests simply never wind up passing.
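The loop above can be sketched as an orchestrator, where the agent calls are stand-in callables for whatever LLM invocations you use (the names and structure here are my own, not a prescribed API); the only thing that matters is the information barrier, i.e. that only the QA agent ever sees both the code and the tests:

```python
def run_pipeline(spec: str, coder, tester, qa, run_tests, max_rounds: int = 5):
    """coder/tester/qa are LLM-backed callables; run_tests executes the suite.
    Only qa ever receives both the code and the tests."""
    code = coder(spec, feedback=None)    # agent 1: code from the spec only
    tests = tester(spec, feedback=None)  # agent 2: tests from the spec only
    for _ in range(max_rounds):
        result = run_tests(code, tests)
        if result.passed:
            return code, tests
        # agent 3: sees both sides, assigns blame, routes feedback back
        blame = qa(code, tests, result.log)
        if blame.code_feedback:
            code = coder(spec, feedback=blame.code_feedback)
        if blame.test_feedback:
            tests = tester(spec, feedback=blame.test_feedback)
    raise RuntimeError("tests never converged; suspect the spec, not the agents")
```

The `max_rounds` cap is what turns "the tests never pass" into an explicit, inspectable failure instead of an overnight money burner.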
Something I'm starting to struggle with is when agents can now do longer and more complex tasks, how do you review all the code?
Last week I did about 4 weeks of work over 2 days, first with long-running agents working against plans and checklists, then smaller cleanup tasks, bugfixes and refactors. But all this code needs to be reviewed by me and members of my team. How do we do this properly? It's around 20k lines of changes over 30-40 commits. There's no proper solution to this problem yet.
One solution is to start from scratch again, using this branch as a reference, to reimplement in smaller PRs. I'm not sure this would actually save time overall though.
What if instead, the goal of using agents was to increase quality while retaining velocity, rather than the current goal of increasing velocity while (trying to) retain quality? How can we make that world come to be? Because TBH that's the only agentic-oriented future that seems unlikely to end in disaster.
Then what comes next feels less like a new software practice and more like a new religion, where trust has to replace understanding, and the code is no longer ours to question.
The overnight thing is real but overhyped. What actually works is giving agents very narrow tasks with clear success criteria. "Research top 10 Reddit threads about X and summarize pain points" works great. "Build me a feature" overnight is a coin flip.
Biggest lesson: the bottleneck moved from execution to context management. Getting agents to remember what matters and forget what doesn't is harder than the actual task delegation.
It's currently burning through the TESTING.md backlog: https://github.com/alpeware/datachannel-clj
When I graduated in 2012 it was pushed everywhere, including my uni so my undergrad thesis was done in Java.
Everyone was learning it, certifying, building things on top of other things.
EJB, JPA, JTA, JNDI, JMS and JCA.
And then more things to make it even more powerful: Servlets, JSP, JSTL, JSF.
Many companies invested and built various application servers, used by enterprises to this day.
Every engineer I met said Java was the server-side future, don't bother with other tech. You'd just draw the data schema, map the persistence, write the business logic and ship it.
I switched to C++ after a Bjarne talk I attended in 2013. I'm glad I did, although I never worked as a software engineer. Following my passion and going deep into the technology was bliss; the difference between my undergrad Java, my Master's C++ and my PhD Rust is like a kid's toy versus a real turboprop engine.
Don't follow the hype - it will go away and you'll be left with what you've invested into.
I can't understand the mindset that would lead someone not to have realized this from the beginning.
But review fatigue and the resulting apathy are real. Devs should instead be told whether incorrect code for the feature or process they're working on would be high-risk to the business. Lower-risk processes can be LLM-reviewed and merged; higher-risk ones must be human-reviewed.
If the business you're supporting can't tolerate much incorrectness (at least until it's discovered), then guess what: you aren't going to get much of a speed increase from LLMs. I've written about and given conference talks on this over the past year. Teams can improve this problem at the requirements level: https://tonyalicea.dev/blog/entropy-tolerance-ai/
TDD is a tool for working in small steps, so you get continuous feedback on your work as you go, and so you can refine your design based on how easy it is to use in practice. It’s “red green refactor repeat”, and each step is only a handful of lines of code.
TDD is not “write the tests, then write the code.” It’s “write the tests while writing the code, using the tests to help guide the process.”
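To make the "handful of lines per step" concrete, here is one red-green cycle (the function and its spec are invented for illustration):

```python
# Red: write a tiny failing test first.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Green: write just enough code to make it pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Refactor: with the test green, reshape the code safely,
# then repeat with the next handful of lines.
```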
Thank you for coming to my TED^H^H^H TDD talk.
Not a rhetorical question. Trillion-token burners and such.
Even better, though: external test suites. I recently made an S3 server, of which the LLM made quick work for an MVP. Then I found a Ceph S3 test suite that I could run against it, and oh boy. It ended up working really well as TDD, though.
One example I have been experimenting with is Learning Tests[1]. The idea is that when something new is introduced into the system, the agent must write and execute a high-value test to teach itself how to use that piece of code. Because these should be high-leverage, i.e. they can genuinely help anyone understand the code base better, they should be exceptionally well chosen for AIs to iterate against. But again, this is just the expert-human judgement shifted to identifying these tests for the AI to learn from. In code bases that add millions of LoC of new features in days, this would require careful work by the human.
[1] https://anthonysciamanna.com/2019/08/22/the-continuous-value...
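For a flavor of what a learning test looks like, here is one that pins down a genuinely surprising stdlib behavior; running it both teaches and later guards the understanding:

```python
import json

def test_json_keys_become_strings():
    # Learning test: json only allows string object keys, so a dict
    # with int keys does NOT round-trip back to the original.
    original = {1: "a"}
    restored = json.loads(json.dumps(original))
    assert restored == {"1": "a"}
    assert restored != original
```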
If an agent runs unattended for hours, small errors compound quickly. Even simple misunderstandings about file structure or instructions can derail the whole process.
#!python
import sys

# Exit code 2 blocks the action; the message on stderr is fed back to the model.
print("fix needed: method ABC needs a return type annotation on line 45", file=sys.stderr)
sys.exit(2)
Claude Code will show that output to the model. This lets you enforce anything from TDD to a ban on window.alert() in code - deterministically.
This can be the basis for much more predictable enforcement of rules and standards in your codebase.
Once you get used to code based guardrails, you’ll see how silly the current state of the art is: why do we pack the context full of instructions, distract the model from its task, then act all surprised when it doesn’t follow them perfectly!
The cost concern is real but manageable. The key is routing models by task. Complex reasoning gets Opus, routine work gets Sonnet, mechanical tasks get Haiku. Not everything needs the expensive model.
The quality concern is the bigger one. What people miss about autonomous agents is that "running unsupervised" doesn't mean "running without guardrails." Each of my agents has explicit escalation rules, a security agent that audits the others, and a daily health report system that catches failures. The agents that work best are the ones with built-in disagreement, not the ones that just pass things through.
Wrote up the full architecture here if anyone's curious about the multi-agent coordination patterns: https://clelp.com/blog/how-we-built-8-agent-ai-team
I've been playing around with agent orchestration recently and at least tried to make useful outputs. The biggest differences were having pipelines talk to each other and making most of the work deterministic scripts instead of more LLM calls (funnily enough).
Made a post about it here in case anyone is interested about the technicals: https://www.frequency.sh/blog/introducing-frequency/
This resonates with my experience, and it's also a refreshingly honest take: pushing back on heavy upfront process isn't laziness, it's just the engineer's natural drive to build things and feel productive.
Edit: I even have a skill called release-test that does manual QA for every bug we've ever had reported. It takes about 10 hours to run but I execute it inside a VM overnight so I don't care.
You can have Gemini write the tests and Claude write the code. And have Gemini do review of Claude's implementation as well. I routinely have ChatGPT, Claude and Gemini review each other's code. And having AI write unit tests has not been a problem in my experience.
To everyone who plans on automating themselves out of a job by taking the human element out: this is the endgame management wants, replacing your (expensive and non-tax-optimized) labor with scalable opex.
Seems like QA is the new prompt engineering
Since you have to test that manually anyway, you can have the AI write the code first; you test it; if it produces the right result, you tell the AI it's correct and have it write test cases that lock in that result.
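That's essentially characterization testing: freeze the manually verified behavior so later changes can't silently break it. A sketch (the function and its format are invented):

```python
def normalize_phone(raw: str) -> str:
    # AI-written code whose output you verified by hand.
    digits = "".join(c for c in raw if c.isdigit())
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

# After manually confirming the behavior is right, freeze it:
def test_normalize_phone_characterization():
    assert normalize_phone("555-123-4567") == "(555) 123-4567"
```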
Honestly, sometimes the harnesses, specs, predefined structures for skills etc. all feel like over-engineering. 99% of the time a bloody prompt will do. Claude Code is capable of planning, spawning sub-agents, writing tests and so on.
A Claude.md file with general guidelines about our repo has worked extraordinarily well, without any external wrappers, harnesses or special prompts. The MD file doesn't even have a specific structure, just instructions and notes in English.
What he describes is like that. Just that the plan step is suggesting docs, not writing actual docs.
> Changes land in branches I haven't read. A few weeks ago I realized I had no reliable way to know if any of it was correct: whether it actually does what I said it should do.
> I care about this. I don't want to push slop
They clearly didn't care about that. They only cared about non-stop code generation and shipping anything fast. Otherwise they wouldn't have needed weeks to realise they weren't reading or testing this code - it's obvious from the outset.
Maybe their approach has since changed and that's fine, but at the beginning they very much did not care, and I feel people only keep saying they do because otherwise they'd have to be the ones to admit the emperor isn't wearing clothes.
The architecture we landed on: ingest goes through a certainty scoring layer before storage. Contradictions get flagged rather than silently stacked. Memories that get recalled frequently get promoted; stale ones fade.
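A toy version of that promote/fade idea, to make the mechanics concrete (all names and constants here are invented, not the author's implementation):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    certainty: float               # score assigned by the ingest layer
    recalls: int = 0
    last_recall: float = field(default_factory=time.time)

class MemoryStore:
    def __init__(self, contradiction_check):
        self.items: list[Memory] = []
        self.flagged: list[tuple[Memory, Memory]] = []
        self.contradicts = contradiction_check  # LLM- or rule-based predicate

    def ingest(self, text: str, certainty: float):
        mem = Memory(text, certainty)
        # Flag contradictions instead of silently stacking them.
        for old in self.items:
            if self.contradicts(old.text, text):
                self.flagged.append((old, mem))
        self.items.append(mem)

    def recall(self, query: str, now=None) -> list[Memory]:
        now = now or time.time()
        def score(m: Memory) -> float:
            decay = 0.5 ** ((now - m.last_recall) / 86400)   # stale memories fade
            return m.certainty * decay * (1 + 0.1 * m.recalls)  # frequent recall promotes
        hits = sorted((m for m in self.items if query in m.text),
                      key=score, reverse=True)
        for m in hits:
            m.recalls += 1
            m.last_recall = now
        return hits
```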
It's early but the difference in agent coherence over long sessions is noticeable. Happy to share more if anyone's going down this path.
the part that doesn't get talked about enough: most people are hitting a single provider API and treating it as fixed cost. but inference pricing varies a lot across providers for the same model. we've seen 3-5x spreads for equivalent quality on commodity models.
so half the cost problem is architectural (don't let agents spin unboundedly) and the other half is just... shopping around. not glamorous but real.
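The shopping-around half really is this simple; a sketch with invented provider names and prices (per million tokens) standing in for the 3-5x spreads mentioned above:

```python
# Hypothetical per-million-token prices for the same commodity model.
PRICES = {
    "provider-a": 0.90,
    "provider-b": 0.30,
    "provider-c": 0.55,
}

def cheapest_provider(prices: dict[str, float]) -> tuple[str, float]:
    # At equivalent quality, routing to the cheapest provider is pure savings.
    name = min(prices, key=prices.get)
    return name, prices[name]
```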
- privacy policy links to marketing company `beehiiv.com`. the blog author doesn't show up there.
- the profile picture url is `.../Generated_Image_March_03__2026_-_1_55PM.jpg.jpeg`
i didn't dig or read further.
That’s really putting the cart before the horse. How do you get to “merging 50 PRs a week” before thinking “wait, does this do the right thing?”
I want to subscribe, but I never end up reading newsletters if they land in my email inbox.
I've been building OctopusGarden (https://github.com/foundatron/octopusgarden), which is basically a dark software factory for autonomous code generation and validation. A lot of the techniques were inspired by StrongDM's production software factory (https://factory.strongdm.ai/). The autoissue.py script (https://github.com/foundatron/octopusgarden/blob/main/script...) does something really close to what others in this thread are describing with information barriers. It's a 6-phase pipeline (plan, review plan, implement, cold code review, fix findings, CI retry) where each phase only gets the context it actually needs. The code review phase sees only the diff. Not the issue, not the plan. Just the diff. That's not a prompt instruction, it's how the pipeline is wired. Complexity ratings from the review drive model selection too, so simple stuff stays on Sonnet and complex tasks get bumped to Opus.
On the test freezing discussion, OctopusGarden takes a different approach. Instead of locking test files, the system treats hand-written scenarios as a holdout set that the generating agent literally never sees. And rather than binary pass/fail (which is totally gameable, the specification gaming point elsewhere in this thread is spot on), an LLM judge scores satisfaction probabilistically, 0-100 per scenario step. The whole thing runs in an iterative loop: generate, build in Docker, execute, score, refine. When scores plateau there's a wonder/reflect recovery mechanism that diagnoses what's stuck and tries to break out of it.
The point about reviewing 20k lines of generated code is real. I don't have a perfect answer either, but the pipeline does diff truncation (caps at 100KB, picks the 10 largest changed files, truncates to 3k lines) and CI failures get up to 4 automated retry attempts that analyze the actual failure logs. At least overnight runs don't just accumulate broken PRs silently.
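The truncation step described above could look something like this (the 100KB / 10 files / 3k lines numbers are from the comment; the function itself is my sketch, not OctopusGarden's code):

```python
MAX_TOTAL_BYTES = 100 * 1024   # cap the whole diff at 100KB
MAX_FILES = 10                 # keep only the 10 largest changed files
MAX_LINES_PER_FILE = 3000      # truncate each file's diff to 3k lines

def truncate_diff(file_diffs: dict[str, str]) -> str:
    # Take the largest files first, on the theory they carry the most signal.
    largest = sorted(file_diffs.items(), key=lambda kv: len(kv[1]), reverse=True)
    out, used = [], 0
    for path, diff in largest[:MAX_FILES]:
        lines = diff.splitlines()[:MAX_LINES_PER_FILE]
        chunk = f"--- {path}\n" + "\n".join(lines) + "\n"
        if used + len(chunk) > MAX_TOTAL_BYTES:
            break
        out.append(chunk)
        used += len(chunk)
    return "".join(out)
```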
Also want to shout out Ouroboros (https://github.com/Q00/ouroboros), which comes at the problem from the opposite direction. Instead of better verification after generation, it uses Socratic questioning to score specification ambiguity before any code gets written. It literally won't let you proceed until ambiguity drops below a threshold. The core idea ("AI can build anything, the hard part is knowing what to build") pairs well with the verification-focused approaches everyone's discussing here. Spec refinement upstream, holdout validation downstream.
People are so enamored with how fast the 20% part is now and yes it’s amazing. But the 80% part by time (designing, testing, reviewing, refactoring, repairing) still exists if you want coherent systems of non-trivial complexity.
All the old rules still apply.
If you don't review the result, who is going to want to use or even pay for this slop?
Reviewing is the new bottleneck. If you cannot review any more code, stop producing new code.
Good luck doing that in any company that does something meaningful. I can't believe anybody can seriously be ok with such a workflow, except maybe for your little pet project at home.
Telling Claude to turn your notes into a blog post with simple, terse language does not hide your own lack of taste.
I have been asking these tools to build other types of projects where it seems(?) much more difficult to verify without a human in the loop. One example: I asked Codex to build a simulation of the solar system using a Metal renderer. It produced a fun working app quickly.
I asked it to add bloom. It looped for hours, failing. I would have to manually verify — because even from images — it couldn't tell what was right and wrong. It only got it right when I pasted a how-to-write-a-bloom-shader-pass-in-Metal blog post into it.
Then I noticed that all of the planet textures were rotating oddly every time I orbited the camera. Codex got stuck in another endless loop of "Oh, the lookAt matrix is in column major, let me fix that <proceeds to break everything>." or focusing (incorrectly) on UV coordinates and shader code. Eventually Codex told me what I was seeing "was expected" and that I just "felt like it was wrong."
When I finally realised the problem was that Codex had drawn the planets with back-facing polygons only, I reported the error, to which Codex replied, "Good hypothesis, but no"
I insisted that it change the culling configuration and then it worked fine.
These tools are fun, and great time savers (at times), but take them out of their comfort zone and it becomes real hard to steer them without domain knowledge and close human review.
These are fundamentals of CS that we are forgetting as we dismantle all truth and keep rocketing forward into LLM psychosis.
> I care about this. I don't want to push slop, and I had no real answer.
The answer is to write and understand code. You can't simultaneously not want to push slop and want to just use LLMs.
Don't get me wrong, I use agentic coding often, when I feel it's going to type it faster than me (e.g. a lot of scaffolding and filler code).
Otherwise, what's the point?
I feel the whole industry is having its "Look ma! no hands!" moment.
Time to mature up, and stop acting like sailing is going where the seas take you.
Code Review: https://news.ycombinator.com/item?id=47313787
If you don’t trust the agent to do it right in the first place why do you trust them to implement your tests properly? Nothing but turtles here.
Whenever I was coding a serious solution as a technical co-founder, every single day brought a major new debate about product direction. Though we made massive 'progress' and built out a whole new universe in software, we haven't yet managed to find product-market fit. It's constant tension. If the intelligence of two relatively intelligent humans with a ton of experience and complementary expertise isn't enough to find product-market fit after one year, that gives you an idea of how high the bar is for an AI agent.
It's like the problem was that neither I nor my domain-expert co-founder, who had been in his industry for over 15 years, had a sufficiently accurate worldview about the industry or human psychology to produce a financially viable solution. Technically, it works perfectly; it just doesn't solve anyone's problem.
So just imagine how insanely smart AI has to be to compete in the current market.
Maybe you could have 100 agents building and promoting 100 random apps per day... But my feeling is that you're going to end up spending more money on tokens and domain names than you will earn in profits. Maybe deploy them all under the same domain with different subdomains? Not great for SEO... Also, the market for all these basic low-end apps is going to be extremely competitive.
IMO, the best chance to win will be on medium and complex systems and IMO, these will need some kind of human input.
1. Write tons of documentation first, NASA-style: every single known piece of information that is important to the implementation. As it's a rewrite of a legacy project, I know pretty much everything I need, so there is very little idea validation/discovery in the loop at that stage. The documentation is structured in nested folders and multiple small .md files, because its total size is already larger than Claude Code's context (it still fits into Gemini's). Some of the core design documents are included in AGENTS.md (with symlinks to the GEMINI/CLAUDE mds).
For that particular project I spent around 1.5 months writing those docs. I used Claude to help, especially based on the existing code base, but the docs are read and validated by humans as the single source of truth. For every document I also threw Gemini and Codex at it to analyze for weaknesses or flaws (that worked great, btw).
2. TDD in its extreme version, with unit tests, integration tests, e2e, visual testing in Maestro, etc. The whole implementation process is split into multiple modules and phases, but each phase starts with writing tests first. Again, as soon as a test plan is ready, I also throw it at Gemini and Codex to find flaws, missed edge cases, etc. After implementing the tests, one more round: give them to Gemini/Codex to analyze and critique.
3. Actual coding. This part is the fastest now, especially with docs and tests in place, but it's still crucial to split the work into manageable phases/chunks, validate every phase manually, and occasionally run rounds of Gemini/Codex independently verifying that the code matches the docs and doesn't contain flaws, extra duplication, etc.
I never let Claude commit to git. I review the changes quickly, checking whether the structure of the code makes sense and skimming the most important files to see if it looks good to me (i.e. no major bullshit, which, frankly, has never happened yet), and commit everything myself. Again, I try to keep the phases small enough that my quick skim review is still meaningful.
If my manual inspection/testing after a phase shows something missing or deviating, the first thing I ask is "check whether that is in our documentation". Then repeat the loop: update docs, update/add tests, implement.
The project is still in progress, but so far I'm quite happy with the process and the speed. In a way, I feel "writing documentation" and TDD have always been good practices, but too expensive given that the same time could have been spent writing actual code. AI writing the code flipped that dynamic, so I'm happy to spend more time on actual architecting, debating and making choices than on finger tapping.
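For concreteness, the layout from step 1 might look something like this (paths and names invented; only the many-small-files and single-source-of-truth symlink ideas come from the comment):

```shell
# Hypothetical layout: many small .md files, each well under one model's
# context window, plus one canonical agent-instructions file.
mkdir -p docs/architecture docs/modules docs/testing
touch AGENTS.md
# Symlink so every tool reads the same instructions:
ln -sf AGENTS.md CLAUDE.md
ln -sf AGENTS.md GEMINI.md
```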
How is this even possible? Am I the only SWE who feels like the easiest part of my job is writing code and this was never the main bottleneck to PR?
Before CC I'd probably spend around 20-30% of my day just writing code in an IDE. That's maybe 10% now. I'd also spend 20-30% of my day reading code and investigating issues, which is now maybe 10-15% using CC to help with investigation and explanations.
But there's a huge part of my day, perhaps the majority it, where I'm just thinking about technical requirements, trying to figure out the right data model & right architecture given those requirements, thinking about the UX, attending meetings, code reviews, QA, etc, etc, etc...
Are these people who are spitting out code literally doing nothing but writing code all day without any thought so now they're seeing 4-5x boosts in output?
For me it's probably made me 50% more efficient in about 40-50% of my work. So I'm probably only like 20-25% more efficient overall. And this assumes that the code I'm getting CC to produce is even comparable to my own, which in my experience it's not without significant effort which just erodes any productivity benefit from the production of code.
If your developers are raising 5x more PRs something is seriously wrong. I suspect that's only possible if they're not thinking through things and just getting CC to decide the requirements, come up with the architecture, decide on implementation details, write the code and test it. Presumably they're also not reviewing PRs, because if they were and there is this many PRs being raised then how does the team have time to spit out code all day using CC?
People who talk about 5x or 10x productivity boosts are either doing something wrong or just building prototypes. As someone who has worked in this industry for 20 years, I literally don't understand how what some people describe can even be happening in functional SWE teams building production software.
I don't think AI will ever solve this problem. It will never be more than a tool in the arsenal. Probably the best tool, but a tool nonetheless.