We've got a QA agent that needs to run through, say, 200 markdown files of requirements in a browser session. It's a cool system that has really helped improve our team's efficiency. For the longest time we tried everything to get a prompt like the following working: "Look in this directory at the requirements files. For each requirement file, create a todo list item to determine if the application meets the requirements outlined in that file". In other words: letting the model manage the high-level control flow.
This started breaking down after ~30 files. Sometimes it would miss a file. Sometimes it would triple-test a bundle of files and take 10 minutes instead of 3. An error in one file would convince it that it needed to re-test four previous files, for no reason. It was very frustrating. We quickly discovered during testing that there was no consistency to its (Opus 4.6 and GPT 5.4, IIRC) ability to actually orchestrate the workflow. Sometimes it would work, sometimes it wouldn't. I've also tested it once or twice against Opus 4.7 and GPT 5.5, not as extensively, but they seem to have the same problems.
We ended up creating a super basic deterministic harness around the model. For each test case, trigger the model to test that test case, store results in an array, write results to file. This has made the system a billion times more reliable. But it's also made the agent impossible to run on any managed agent platform (Cursor Cloud Agents, Anthropic, etc.) because they're all so gigapilled on "the agent has to run everything" that they can't see how valuable these systems can be if you just add a wee bit of determinism at the right place.
The models are already good enough for code generation. What we need is the harness around them actually deterministically enforcing a specific path and "leashing" the model's output to align with the user's intention as much as possible. You can't make the output of the model deterministic, but you can make everything around it so.
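Concretely, the whole harness is little more than this (a sketch; `run_qa_agent` is a stand-in for however you invoke the model, whether CLI subprocess or SDK):

```python
# The loop, ordering, and bookkeeping are plain code; only the per-file
# judgment is delegated to the model.
import json
from pathlib import Path

def run_qa_agent(requirement_text: str) -> dict:
    return {"passed": True, "notes": "stub"}  # hypothetical agent invocation

results = []
for req_file in sorted(Path("requirements").glob("*.md")):  # deterministic order
    outcome = run_qa_agent(req_file.read_text())
    results.append({"file": req_file.name, **outcome})
    # Persist after every file so a crash never loses progress.
    Path("results.json").write_text(json.dumps(results, indent=2))
```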
Trying to make enforcement work with prompts is like a government agency investigating/auditing itself: there's no incentive to find problems, so you'll always inevitably get the "All good, boss!"
However, there are some things that I think need a foundational next-generation improvement of some sort. The way that LLMs sort of smudge away "NEVER DO X", and can, even after a lot of work, end up reading it as a bit of a "PLEASE DO X", seems fundamental to how they work. It can be easy to lose track of this while we are still in the initial flush of figuring out what they can do (despite all we've already found), but LLMs are not everything we're looking for out of AI.
There should be some sort of architecture that can take a "NEVER DO X" and treat it as a human would. There should be some sort of architecture that, instead of having a "context window", has memory hierarchies something like ours, where if two people have sufficiently extended conversations with what was initially the same AI, the resulting two AIs differ not just in their context windows but have actually become two individuals.
I of course have no more idea what this looks like than anyone else. But I don't see any reason to think LLMs are the last word in AI.
The reason "DO NOT SKIP" fails is that your agent is responsible for too many things, and there are things in context pulling attention away from that guidance.
But nobody said the agent that does enforcement must be the same agent that builds. While you can likely encode some smart decision-making logic in your deterministic control flow, you'll either make it too rigid to work well or so complex that you might as well just use the agent; it will be cheaper to set up and maintain.
You essentially need 3 base agents:
- Supervisor that manages the loop and kicks the right things into gear if things break down
- Orchestrator that delegates things to appropriate agents and enforces guardrails where appropriate
- Workers that execute units of work. These may take many shapes.
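One way the three roles could fit together (a sketch, not a prescription; the supervisor and orchestrator are deterministic code, and only workers would actually call a model, stubbed out here):

```python
def worker_test(task: str) -> dict:
    return {"ok": True, "output": f"tested {task}"}  # stand-in for an agent call

WORKERS = {"test": worker_test}

def orchestrator(task: str) -> dict:
    result = WORKERS["test"](task)                   # delegate to the right worker
    if "tested" not in result.get("output", ""):     # guardrail enforced in code
        return {"ok": False}
    return result

def supervisor(tasks: list[str]) -> None:
    for task in tasks:                               # the loop itself never drifts
        for _ in range(3):                           # retry if things break down
            if orchestrator(task)["ok"]:
                break
        else:
            print(f"escalate to human: {task}")      # hypothetical escape hatch

supervisor(["login flow", "checkout flow"])
```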
hmmmmmm maybe i could vibecode a harness based on that pi thing i've heard about, and integrate it closer with jj instead of relying on llms knowing how to use it, and make certain stages guaranteed to run... oh dear
edit: also i can't bring myself to believe the 'ultimate' form or whatever stabilizes out will be chat-based interfaces for coding and code generation
i think it's just that openai happened to strike gold with ChatGPT and nobody has time to figure anything else out because they've got to get the bazillion investor dollars with something that happens to kinda work
also, afaiu, all these instruct models are based on 'base' models that 'just' do text prediction, without replying in a chat format; will we see code generation models that output just code, without the chat stuff?
This is essentially declarative programming. Most traditional programming is imperative, which is what most developers are used to: I give an exact set of instructions and expect them to be obeyed as written. Agents are way more declarative than imperative: you give them a result, and they work on getting that result. Now the problem, of course, is that in something declarative like, say, SQL, the result is going to be pretty consistent and well-defined, but you're still trusting the underlying engine on how to go about it.
Thinking about agents declaratively has helped me a lot, rather than trying to design these Rube Goldberg "control" systems around them. Didn't get it right? OK, I validated that it's not correct; let's try again or approach it differently.
If you really need something imperative, then write something imperative! Or have the agent do so. This stuff reads like trying to use the wrong tool for the job.
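For what it's worth, the declarative stance fits in a few lines of Python (a sketch; `generate` stands in for whatever agent call you use):

```python
# Declare the result you want, validate it, retry -- rather than scripting
# every intermediate step.
def generate(goal: str) -> str:
    return f"candidate output for: {goal}"  # hypothetical agent call

def validated(goal: str, check, max_tries: int = 3) -> str:
    for _ in range(max_tries):
        candidate = generate(goal)
        if check(candidate):       # you define "correct"; the agent finds a path
            return candidate
    raise RuntimeError(f"could not satisfy: {goal}")

result = validated("a summary of this RFC", lambda text: len(text) > 0)
```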
I'm a firm believer that a "thin harness" is the wrong approach for this reason and that workflows should be enforced in code. Doing that ensures the workflow is always followed and reduces tokens, since the LLM no longer has to consider the workflow or read the workflow instructions. But it also allows more interesting things: you can split plans into steps and feed them through a workflow one by one (so the model no longer needs as strong multi-step instruction following); you can give each workflow stage its own context or prompts; you can add workflow-stage-specific verification.
Based on my experience with Claude Code and Kilo Code, I've been building a workflow engine for this exact purpose: it lets you define sequences, branches, and loops in a configuration file that it then steps through. I've opted for passing JSON data between stages and using the `jq` language for logic and data extraction. The engine itself is hand-coded Rust (the recent Claude Code bugs taught me that the core has to be solid), while the actual LLM calls are done in a subprocess (currently my own TypeScript + Vercel AI SDK harness, but the plan is to also support third-party ones like the Claude Code CLI, Codex CLI, etc., in order to be able to use their subscriptions).
I'm not quite ready to share it just yet, but I thought it was interesting to mention since it aims to solve the exact problem that OP is talking about.
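To give a flavor of the stage-stepping idea without the engine itself, here's a rough Python sketch (names are illustrative; it assumes the `jq` CLI is installed):

```python
# JSON flows between stages; jq (via the CLI here) handles extraction and
# branch conditions. run_stage() stands in for the real LLM subprocess.
import json, subprocess

def jq(expr: str, data: dict) -> str:
    out = subprocess.run(["jq", "-r", expr], input=json.dumps(data),
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def run_stage(name: str, data: dict) -> dict:
    return {**data, "last_stage": name}  # real engine: LLM call in a subprocess

data = {"task": "add pagination"}
for stage in ["plan", "implement", "verify"]:  # a sequence from the config file
    data = run_stage(stage, data)
    if jq(".last_stage", data) == "verify":    # conditions are jq expressions
        print("done:", json.dumps(data))
```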
Slash commands, for instance, are a misfeature. I should never have to wait for the chatbot to finish a turn so that I can check on the status of my context window or how much money I've spent this session. Control should be orthogonal to the chat loop.
Even things that have nothing to do with controlling the text generator's input and output are entangled with chat actions for no good reason except "it's a chat thing, let's pretend we're operating an IRC bot".
There are a zillion LLM agents out there nowadays, but none of them really separate control, the agent loop, and presentation well. (A few do at least have headless modes, which is cool.)
Here's a pretty specific example of what I mean, but maybe food for thought:
Podcast (20 minute digest): https://pub-6333550e348d4a5abe6f40ae47d2925c.r2.dev/EP008.ht...
0 - https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-...
The second it works, bake the workflow into the harness. Yesterday I was doing just that, and the whole agent loop disappeared because the process could be condensed into a one-shot request (+1 MorphLLM fast apply) from careful context construction. (It was an Autoresearcher.)
My first thought was: well, agents seem nice, but I think AI workflows are a better bet. However, I didn't really understand AI or agents in depth, and felt like I was just "doing things the old way", and that removing flexibility from agents was a ridiculous idea.
After some research, I got the impression that I was right. A well-defined workflow and scope are just what's needed for AI. It's cheaper and more consistent. It probably even makes the whole thing run well with non-SOTA models.
https://github.com/yieldthought/flow
Happily, 5.5 is good at writing and using it.
This is the only way to guarantee AI usage doesn't burn you. Any automation beyond this is just theater, no matter how much that hurts to hear/undermines your business model.
A bird sings, a duck quacks. You don't expect the duck to start singing now, do you?
The alternative is running your ten lines of Python in the most expensive, slowest, least reliable way possible. (Sure is popular though)
For example, most people were using the agents for internet research. They would spin for hours, get distracted, or forget what they were supposed to be doing.
Meanwhile, `import duckduckgo` and `import llm` and you can write ten lines that do the same thing in 20 seconds, actually run deterministically, and cost 50x less.
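Something like this sketch, using the real package names (`duckduckgo_search` and the `llm` library; the model name is whatever you have configured):

```python
# Ten-ish lines that replace the research agent: one search, one LLM call.
from duckduckgo_search import DDGS
import llm

query = "deterministic agent harnesses"
results = DDGS().text(query, max_results=5)   # plain web search, no agent loop
snippets = "\n".join(f"- {r['title']}: {r['body']}" for r in results)

model = llm.get_model("gpt-4o-mini")          # any model llm knows about
summary = model.prompt(f"Summarize for a research note:\n{snippets}")
print(summary.text())
```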
The current models are much better -- good enough that Auto-GPT is real now! -- but running poorly specified control flow in the most expensive way possible is still a bad idea.
I created it to address this exact issue. It is a vendor-neutral ESLint-style policy engine and currently supports Claude Code, Codex, and Copilot.
It uses the agents' hook payloads and session history to enforce the policies. It can be set up to block commits if a file has been modified since the checks were last run, disallow content or commands using string or regex matching, and enforce TDD without any extra reporter setup, and it works with any language.
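For anyone curious about the mechanics: the core of this kind of guard is tiny. A sketch (not Probity's actual code) of a Claude Code-style PreToolUse hook that blocks commands by regex; check your agent's hook docs for the exact payload contract:

```python
# The agent sends a JSON payload on stdin; in Claude Code, exit code 2
# blocks the tool call and feeds stderr back to the model.
import json, re, sys

BLOCKED = [r"git\s+push\s+--force", r"rm\s+-rf\s+/"]  # example policies

payload = json.load(sys.stdin)
command = payload.get("tool_input", {}).get("command", "")

for pattern in BLOCKED:
    if re.search(pattern, command):
        print(f"blocked by policy: {pattern}", file=sys.stderr)
        sys.exit(2)
```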
Feedback welcome: https://github.com/nizos/probity
Swamp teaches your agent to build and execute repeatable workflows, makes all the data it produces searchable, and enables your team to collaborate.
We also build swamp and swamp club using swamp. You can see that process in the lab[2]. This combines all of the creativity of the LLM for the parts that matter, while providing deterministic outcomes for the parts you need to be deterministic.
I started working on it piece by piece about 14 years ago. It was originally targeted at junior developers, to provide them the necessary security and scalability guardrails while trying to maintain as much flexibility as possible. It's very flexible; most of Saasufy is itself built using Saasufy. Only the actual user service and orchestration is custom backend code.
Also, I designed it in a way that would help the user fast-track their learning of important concepts like authentication, access control, and schema validation.
It turns out that all of these things that junior devs need are exactly what LLMs need as well.
I tested it with Claude Code originally and got consistently great results. More recently, I tested with https://pi.dev with GPT 5.5 and it seemed to be on par.
Even skills are not a catch-all: besides the supply chain risk of using skills you pull from someone else, a lot of tasks require an assortment of skills.
I've accommodated this with my agent team (mostly Sonnets, fwiw) by developing what we call "operational reflexes". Basically, common tasks that require multiple domains of expertise are given a lockfile defining which skills are most relevant (even which fragment of a skill) and how in-depth/verbose each element needs to be to accomplish the same task the same way, with minimal hallucinations or external sources.
A coordinator agent assigns the tasks, selects the relevant lockfile, and sends it along, or passes it along to another agent with a different specified lockfile geared towards reviewing.
It's a bit of work, but this workflow dramatically increased the quality of output I get from my agents for technical work, and I don't really need to write many prompts myself like this.
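A hypothetical shape for one of those lockfiles (my naming, purely to illustrate the idea):

```python
# Pins which skills -- and which fragments, at what depth -- a task gets,
# so every run assembles the same context the same way.
REFLEX = {
    "task": "review-db-migration",
    "skills": [
        {"name": "sql-review", "fragment": "locking", "depth": "full"},
        {"name": "rollback-playbook", "fragment": "*", "depth": "summary"},
    ],
}

def build_context(reflex: dict) -> str:
    # The coordinator loads only these fragments into the worker's prompt.
    return "\n".join(f"[{s['name']}/{s['fragment']}:{s['depth']}]"
                     for s in reflex["skills"])
```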
I am finding that the better the quality gates are, the lower-quality LLM you can use for the same result (at a cost of time).
Markdown files are a good reference but they are a weak enforcement tool and go stale easily.
Avoid burying yourself in more skills docs you're not even writing yourself and will probably never read. Focus that effort toward deterministic tooling. (Not that skills or prompts are bad; I agree a meta-skill that tells an agent which subagents to run, and in what order, is useful.)
We need to define agents in code, and drive them through semi-deterministic workflows. Kick subtasks off to agents where appropriate, but do things like gather context and deal with agent output deterministically.
This is a massive boost in accuracy, cost efficiency, AND speed. Stop using tokens to do the deterministic parts of the task!
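In sketch form (assuming a `call_agent` wrapper around whatever model you use):

```python
# Gather context and handle output in plain code; spend tokens only on the
# judgment call in the middle.
import json
from pathlib import Path

def call_agent(prompt: str) -> str:
    return "model output goes here"  # stand-in for your model call

context = "\n\n".join(p.read_text() for p in sorted(Path("src").glob("*.py")))
answer = call_agent(f"Given this code, list likely-dead functions:\n{context}")
Path("report.json").write_text(json.dumps({"dead_functions": answer}))
```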
It's externally orchestrated and managed, not by an agent running the loop.
The goal is to force LLMs to produce exactly what you want every time.
I will be open sourcing it soon. You can use whatever harness or tools you already use; you just delegate the actual implementation to the engine.
My personal opinion is that AI and agents are being misrepresented… The amount of setup, guidance, and testing required to create a smarter version of a form is insane.
At the moment my small test is:
- Compressed instructions (to fit within the 8k limit)
- 9 different types of policies to guide the agent (JSON)
- 3 actual documents outlining domain knowledge (JSON)
- 8 topics (hint harvesting, guide rails, and the pieces of information prepared as adaptive cards for the user)
- 3 tools (to allow for connectors)
The whole thing is as robust as I can make it but it still feels like a house of cards and I expect some random hiccup will cause a failure.
Both designs (Lightroom, game engines) have worked successfully.
There's probably nothing that prevents mixing both approaches in the same "app".
Still have yet to see a universal treatment that tackles this well.
Especially all bookkeeping logic should move into the symbolic layer: https://zby.github.io/commonplace/notes/scheduler-llm-separa...
I've tried doing something similar with AI by running a prompt several times and then having an agent pick the best response. It works fairly well, but it burns a lot of tokens.
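The pattern is just best-of-n with a judge call (sketched with the `llm` library; the model name and prompts are mine):

```python
import llm

model = llm.get_model("gpt-4o-mini")  # assumption: any llm-supported model

# Sample the same prompt several times...
prompt = "Write a one-line commit message for: fix off-by-one in pager"
candidates = [model.prompt(prompt).text() for _ in range(5)]

# ...then have a judge call pick the winner by index.
numbered = "\n".join(f"{i}: {c}" for i, c in enumerate(candidates))
judge = model.prompt(
    f"Pick the best response. Reply with the number only:\n{numbered}"
)
print(candidates[int(judge.text().strip())])
```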
"One thing that I have seen in the wild quite a bit is taking the agent pattern and sprinkling it into a broader more deterministic DAG." - https://github.com/humanlayer/12-factor-agents/blob/main/REA...
Agents aren't reliable; use workflows instead.
I feel hooks are an integral part of your code harness; that's the only deterministic way to control coding agents.
Can't wait for y'all to come full circle and invent programming from first principles.
using this is going to do the opposite of what you want
Phase 1: only test files may be altered, exactly one new test failure must appear.
Phase 2: only code files may be altered. The phase is cleared when the test now succeeds and no other tests fail.
If you get stuck, bail and ask for guidance.
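A sketch of how those two gates could be checked in code (assumes git and pytest, and that tests live under tests/; adjust the predicates to your repo):

```python
import subprocess

def changed_files() -> set[str]:
    out = subprocess.run(["git", "diff", "--name-only"],
                         capture_output=True, text=True, check=True)
    return set(out.stdout.splitlines())

def failed_tests() -> set[str]:
    # -rf prints a "FAILED path::test" summary line per failing test.
    out = subprocess.run(["pytest", "-q", "-rf", "--tb=no"],
                         capture_output=True, text=True)
    return {line.split()[1] for line in out.stdout.splitlines()
            if line.startswith("FAILED")}

def phase1_ok(baseline: set[str]) -> bool:
    only_tests = all(f.startswith("tests/") for f in changed_files())
    return only_tests and len(failed_tests() - baseline) == 1

def phase2_ok(baseline: set[str]) -> bool:
    only_code = all(not f.startswith("tests/") for f in changed_files())
    # The new failure is gone and nothing else broke.
    return only_code and failed_tests() <= baseline
```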
It feels like we are still discovering the optimal operating range on a spectrum between these two domains. Perhaps the optimal range will depend on the specific field in question.
In the real world almost nothing runs like that - only software, and even that has a lot of failures.
So perhaps, rather than trying to make agents run deterministically, the goal is to assume some failure rate and build compensating controls around it.
I decided to build my agentic environment differently: local only, sandboxed, enforced with Go-specific requirement definitions that different agent roles cannot break, as a contract.
That alone is far better than any hyped markdown-storage-sold-as-memory project I've seen in the last few weeks.
Currently I am experimenting with skills tailored to other languages, because agent skills are actually kinda useless: they're not enforced, nor can any of their metadata be used to predictably verify their behavior.
My recommendation to others is: treat LLM output as malware. Analyse its behavior, not its code. Never let LLMs work outside your sandbox. Make sure they are not able to escape sandboxes. And that includes removing the Bash tool, for example, because that's not a reproducible sandbox.
Also, choose a language that comes with a strong unit testing methodology. I chose Go because it allows me to write unit tests for my tools, and even for agent-to-agent communication down the line (with some limitations due to TestMain, but at least it's possible).
If you write your agent environment or harness in TypeScript, you've already failed before you started: compiled code isn't typesafe, because the compiler doesn't generate runtime type checks in the resulting JS.
Anyways, my two cents from the purpleteaming perspective that tries to make LLMs as deterministic as possible.
1. an adversarial agent harness that uses one agent to create a plan and implement it, and another to review the plan and code-review each step.
2. an agentic validation suite -- a more flexible take on e2e testing.
3. some custom skills that explain how to use both of those flows.
With this in place you can formulate ideas in a chat session, produce planning artifacts, then use the adversarial system to implement the plans and the validation layer to get everything working e2e for human review.
There are a lot of tools you can use for these things but I chose to just build the tooling in the repo as I go.
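The adversarial harness (1) boils down to a loop like this (a sketch using the `llm` library; prompts and model choice are mine, not my actual tooling):

```python
import llm

builder = llm.get_model("gpt-4o-mini")   # drafts the plan and the code
reviewer = llm.get_model("gpt-4o-mini")  # reviews each step independently

plan = builder.prompt("Plan, step by step: add rate limiting to /api/search").text()
for step in [s for s in plan.splitlines() if s.strip()]:
    draft = builder.prompt(f"Implement this step, code only:\n{step}").text()
    verdict = reviewer.prompt(
        f"Review this change for step '{step}'. Reply APPROVE or list issues:\n{draft}"
    ).text()
    if "APPROVE" not in verdict:         # send it back for one revision pass
        draft = builder.prompt(f"Revise to address:\n{verdict}\n\n{draft}").text()
```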
There's this guy at work who is kind of precious about Claude Code. When Hegseth banned Anthropic, this guy freaked out. He spent many pages ranting about how terrible Gemini and Codex are and basically nuked his project. He insisted only Claude could do his project.
Meanwhile, I managed to redo his work with GPT 4o in a weekend. No AI generated code anywhere, just being capable of writing a for-loop over a directory of files my own self. The AI part is only really necessary because folks can't be bothered to author documents with proper hierarchies.
People talk about "AI is going to eliminate boilerplate and accelerate development and we'll do new jobs that were too costly before". Yet this guy spent weeks coaxing Claude to do something that took me a few hours because "boilerplate" is really not that big of a deal. If this is the kind of job we're going to be able to do because the value-to-effort ratio was less than 1, it kind of indicates to me that there isn't a lot of value to gain at any level of effort. Yeah, it's not really worth your time to bend over and pick up a penny, but even if I had a magical penny snagging magnet, I'm still going to ignore the pennies because that's just how valueless pennies are.
If AI lets me never have to open a PowerPoint from a client to read the chart values from the piechart they screenshot and pasted into PowerPoint, that's wonderful. What more would I ever need? The rest of the work just isn't that hard. But if you think AI is going to replace people like me because it can do "boilerplate", the AI is not anywhere near as fast or cheap at getting to a reliable, consistent, repeatable process as a human for that.
You might use an LLM API call here as a translation or summary step in a deterministic workflow, but it is not acting as an agent, because it lacks _agency_.
The value of using an agent harness is precisely that agents are _not deterministic_. You provide agents a goal, tools, and constraints, and they do the task they were asked to perform as best they can figure out how to do it. You may provide them deterministic workflows as tools they can call, but those workflows, outside of the agent harness itself, should not constrain what the agent does. You are paying a lot of money for agent reasoning, not for an expensive data transformation pipeline.
It may be the case that a lot of agentic workflows are more properly done with fully deterministic workflows, but the goal there should be to _remove the agents entirely_ and spend those tokens on non deterministic tasks that require agentic decision making.
I do think there are fundamental limits to what agents are capable of doing unsupervised, and there does need to be a lot more human guidance, observability, and control over what they are doing, but that's sort of the opposite of embedding them in deterministic workflows; that is more of a team integration/communication problem to solve.