I'm so aligned with your take on context engineering / context management. I found the default linear flow of conversation turns really frustrating and limiting. In fact, I still do. Sometimes you know upfront that the next thing you're about to do will flood/poison the nicely crafted context you've built up... other times you realise it after the fact. In either case, you don't have many alternatives but to press on... Trees are the answer for sure.
I actually spent most of Dec building something with the same philosophy for my own use (aka me as the agent) when doing research and ideation with LLMs. Frustrated by most of the same limitations - want to build context to a good place, then preserve/reuse it over and over, fire off side quests, bring back only the good stuff. Be able to traverse the tree forwards and back to understand how I got to a place...
Anyway, you've definitely built the more valuable incarnation of this - great work. I'm glad I peeled back the surface of the moltbot hysteria to learn about Pi.
This is great work; I'm looking forward to seeing how it evolves. So far Claude Code seems best despite its bugs, given the generous subscription, but when the market corrects and prices get closer to API prices, a pay-per-token premium with an optimized experience will probably be a better deal than suffering Claude Code's glitches and paper cuts.
The realization is that, in the end, an agent framework that is customizable and can be recursively improved by agents is going to be better than a rigid proprietary client app.
Google doesn't even provide a tokenizer to count tokens locally. The results of this stupidity can be seen directly in AI Studio, which makes an API call to count_tokens every time you type in the prompt box.
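To make the contrast concrete, here's a rough sketch (SDK surfaces change often, so treat the exact calls as illustrative): counting tokens for OpenAI-style models is an offline tiktoken call, while counting Gemini tokens means a network round trip.

    # Local counting: tiktoken ships the tokenizer, so this never leaves the machine.
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4o")
    print(len(enc.encode("How many tokens is this prompt?")))

    # Gemini: no public local tokenizer, so counting is an API call
    # (google-generativeai SDK; exact surface may differ by version).
    import google.generativeai as genai

    genai.configure(api_key="...")
    model = genai.GenerativeModel("gemini-1.5-flash")
    print(model.count_tokens("How many tokens is this prompt?").total_tokens)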
At least for Codex, the agent runs commands inside an OS-provided sandbox (Seatbelt on macOS, and other stuff on other platforms). It does not end up "making the agent mostly useless".
- Minimal, configurable context - including system prompts [2]
- Minimal and extensible tools; for example, todo tasks extension [3]
- No built-in MCP support; extensions exist [4]. I'd rather use mcporter [5]
Full control over context is a high-leverage capability. If you're aware of the many ways context limits performance (in-context retrieval limits [6], context rot [7], contextual drift [8], etc.), you'll truly appreciate that Pi lets you fine-tune the WHOLE context for optimal performance.
It's clearly not for everyone, but I can see how powerful it can be.
---
[1] https://lucumr.pocoo.org/2026/1/31/pi/
[2] https://github.com/badlogic/pi-mono/tree/main/packages/codin...
[3] https://github.com/mitsuhiko/agent-stuff/blob/main/pi-extens...
[4] https://github.com/nicobailon/pi-mcp-adapter
[5] https://github.com/steipete/mcporter
[6] https://github.com/gkamradt/LLMTest_NeedleInAHaystack
Reading HN I feel a bit out of touch since I seem to be "stuck" on Cursor. Tried to make the jump further to Claude Code like everyone tells me to, but it just doesn't feel right...
It may be due to the size of my codebase -- I'm 6 months into a solo-developer bootstrapped startup, so there isn't all that much there, and I can iterate very quickly with Cursor. And it's mostly SPA browser click-tested stuff. Comparatively, it feels like Claude Code takes an eternity to do anything.
(That said, Cursor's UI does drive me crazy sometimes. In particular, the extra layer of diff review for AI changes (red/green) isn't integrated into git -- I would have preferred it to use something git-native instead (staged vs. unstaged hunks). It's more important to have a good code review experience than to remember which changes I made vs. which changes the AI made.)
I hadn't realized that Pi is the agent harness used by OpenClaw.
I only wish the author changed his stance on vendor extensions: https://github.com/badlogic/pi-mono/discussions/254
This makes it even more baffling why Anthropic went with Bun, a runtime without any sandboxing or security architecture, relying on Apple Seatbelt alone.
I've been running OpenClaw (which sits on top of similar primitives) to manage multiple simultaneous workflows - one agent handles customer support tickets, another monitors our deployment pipeline, a third does code reviews. The key insight I hit was exactly what you describe: context engineering is everything.
What makes OpenClaw particularly interesting is the workspace-first model. Each agent has AGENTS.md, TOOLS.md, and a memory/ directory that persists across sessions. You can literally watch agents learn from their mistakes by reading their daily logs. It's less magic, more observable system.
The YOLO-by-default approach is spot on. Security theater in coding agents is pointless - if it can write and execute code, game over. Better to be honest about the threat model.
One pattern I documented at howtoopenclawfordummies.com: running multiple specialized agents beats one generalist. Your sub-agent discussion nails why - full observability + explicit context boundaries. I have agents that spawn other agents via tmux, exactly as you suggest.
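For reference, the tmux spawning pattern is only a few lines; the "openclaw run ..." invocation below is just a placeholder for whatever agent command you actually use.

    # Sketch of one agent spawning a specialized sub-agent in its own tmux session.
    # "openclaw run ..." stands in for your actual agent CLI.
    import shlex
    import subprocess

    def spawn_agent(name: str, task: str, workspace: str) -> None:
        cmd = f"cd {shlex.quote(workspace)} && openclaw run {shlex.quote(task)}"
        # Detached session: the parent agent can attach, read the pane, or kill it later.
        subprocess.run(["tmux", "new-session", "-d", "-s", name, cmd], check=True)

    spawn_agent("code-review", "review the open PRs and leave comments", "/home/me/work/api")
    # The parent can poll output with: tmux capture-pane -pt code-review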
The benchmark results are compelling. Would love to see pi and OpenClaw compared head-to-head on Terminal-Bench.
Anyway, more on the actual article: what he's done is really cool and features a lot of stuff that has proven to work at the forefront of automatic programming – he has a massive test suite against all major model providers, and he runs his agent against known eval suites as well.
Re: security, I think I need to build an AI credential broker/system. The only way to securely use agents is to never give them access to a credential at all. So the only way to have the agent run a command that requires credentials is to send the command to a segregated process that asks the user for permission, runs it, then returns the status to the agent. It would process read-only requests automatically, but write requests would prompt the user to authorize. I haven't yet found anybody else writing this, so I might as well give it a shot.
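Roughly the shape I have in mind -- everything below is a sketch, the names are made up, and a real read-only check would need to be much more careful than a prefix match:

    # Credential broker sketch: the agent never holds a secret. It submits commands,
    # the broker injects credentials, auto-runs read-only commands, and asks a human
    # before anything that writes. Paths, env vars, and the heuristic are illustrative.
    import os
    import subprocess

    READ_ONLY_PREFIXES = ("git fetch", "git log", "aws s3 ls", "kubectl get")

    def is_read_only(command: str) -> bool:
        return command.strip().startswith(READ_ONLY_PREFIXES)

    def broker_run(command: str) -> subprocess.CompletedProcess:
        if not is_read_only(command):
            answer = input(f"Agent wants to run:\n  {command}\nAllow? [y/N] ")
            if answer.lower() != "y":
                return subprocess.CompletedProcess(command, 1, "", "denied by user")
        # Only this process ever reads the credential; the agent just gets back
        # the exit status and output.
        env = dict(os.environ, GITHUB_TOKEN=open("/secure/github_token").read().strip())
        return subprocess.run(command, shell=True, env=env, capture_output=True, text=True)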
Other than credentialed calls, I have Docker-in-Docker in a VM, so all other actions will be YOLO'd. I think this is the only reasonable system for long-running loops.
I’m on a $100/mo plan, but the codex bar makes it look like I’m burning closer to $500 every 30 days. I tried going local with Qwen 3 (coding) on a Blackwell Pro 6000, and it still feels a beat behind, either laggy, or just not quite good enough for me to fully relinquish Claude Code.
Curious what other folks are seeing: any success stories with other agents on local models, or are you mostly sticking with proprietary models?
I’m feeling a bit vendor-locked into Claude Code: it’s pricey, but it’s also annoyingly good
The YOLO mode is also good, but having a small ‘baby setting mode’ that’s not full-blown system access would make sense for basic security. Just a sensible layer of "pls don't blow up my machine" without killing the freedom :)
I built on ADK (Agent Development Kit), which comes with many of the features discussed in the post.
Building a full, custom agent setup is surprisingly easy and a great learning experience for this transformational technology. Getting into instruction and tool crafting was where I found the most ROI.
> If you're uncomfortable with full access, run pi inside a container or use a different tool if you need (faux) guardrails.
I'm sick of doing this. I also don't want faux guardrails. What I do want is an agent front-end that is trustworthy in the sense that it will not, even when instructed by the LLM inside, do anything to my local machine. So it should have tools that run in a container. And it should have really nice features like tools that can control a container and create and start containers within appropriate constraints.
In other words, the 'edit' tool is scoped to whatever I've told the front-end it can access. So is 'bash', and therefore anything bash does. This isn't a heuristic like the ones everyone running in non-YOLO mode relies on today -- it's more like a traditional capability system. If I want to use gVisor instead of Docker, that should be a very small adaptation. Or Firecracker, or really anything else. Or even some random UART connection to an embedded device, where I want to control it with an agent but the device is neither capable of running the front-end nor of connecting to the internet (and may not even have enough RAM to store a conversation!).
I think this would be both easier to use and more secure than what's around right now. Instead of making a container for a project and then dealing with installing the agent into the container, I want to run the agent front-end and then say "Please make a container based on such-and-such image and build me this app inside." Or "Please make three containers as follows".
As a side bonus, this would make designing a container sandbox sooooo much easier, since the agent front-end would not itself need to be compatible with the sandbox. So I could run a container with -net none and still access the inference API.
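A minimal sketch of what I mean, assuming plain Docker as the backend (image, container names, and the tool surface are placeholders; swapping in gVisor would mostly mean adding --runtime=runsc):

    # Capability-scoped tools: the front-end owns the container handle, and the
    # model's "bash" tool can only ever execute inside it. Nothing here touches the host.
    import subprocess

    def create_workspace(name: str, image: str = "node:22") -> str:
        # --network none: the sandbox has no network; the front-end itself still
        # talks to the inference API from outside the container.
        subprocess.run(
            ["docker", "run", "-d", "--name", name, "--network", "none",
             image, "sleep", "infinity"],
            check=True,
        )
        return name

    def bash_tool(container: str, command: str) -> str:
        # The only execution primitive the model gets; scoped by construction.
        result = subprocess.run(
            ["docker", "exec", container, "bash", "-lc", command],
            capture_output=True, text=True,
        )
        return result.stdout + result.stderr

    ws = create_workspace("silly-node-app")
    print(bash_tool(ws, "node --version && mkdir -p /app"))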
Contrast with today, where I wanted to make a silly Node app. Step 1: Ask ChatGPT (the web app) to make me a Dockerfile that sets up the right tools including codex-rs and then curse at it because GPT-5.2 is really remarkably bad at this. This sucks, and the agent tool should be able to do this for me, but that would currently require a completely unacceptable degree of YOLO.
(I want an IDE that works like this too. vscode's security model is comically poor. Hmm, an IDE is kind of like an agent front-end except the tools are stronger and there's no AI involved. These things could share code.)
I work on internal LLM tooling for a F100 at $DAYJOB and was nodding vigorously while reading this, especially the parts about letting users freely switch between models, and the affordances you need to provide good UX around streaming and tool calling, which seem barely thought out in things like the MCP spec (which has at least gained a way to get friendly display names for tools since I last looked at it).
This is how I prototyped all of mine. Console.Write[Line].
I am currently polishing up one of the prototypes with WinForms (.NET 10) & WebView2. Building something that looks like a WhatsApp conversation in basic WinForms is a lot of work. It takes about 60 seconds in a web view.
I am not too concerned about cross-platform because the vast majority of my users will be on Windows when they want to use this tool.
Small and observable is excellent.
Letting your agent read traces of other sessions is an interesting method of context trimming.
Especially, "always Yolo" and "no background tasks". The LLM can manage Unix processes just fine with bash (e.g. ps, lsof, kill), and if you want you can remind it to use systemd, and it will. (It even does it without rolling it's eyes, which I normally do when forced to deal with systemd.)
Something he didn't mention is git: talk to your agent a commit at a time. Recently I had a colleague check in his minimal, broken PoC on a new branch with the commit message "work in progress". We pointed the agent at the branch and said, "finish the feature we started" and it nailed it in one shot. No context whatsoever other than "draw the rest of the f'ing owl" and it just.... did it. Fascinating.
That's the main point of sub-agents, as far as I can tell. They get their own context, so it's much cheaper. You divide tasks into chunks, let a sub-agent handle each chunk. That actually ties in nicely with the emphasis on careful context management, earlier in the article.
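Mechanically there isn't much to it -- a sub-agent is just a fresh message list that gets only the task description and hands back only its final answer. A sketch against a generic chat-completions client (model name and prompts are placeholders, and a real sub-agent would loop over tool calls):

    # The sub-agent starts with a fresh, minimal context; the parent's long history
    # never sees the intermediate noise, only the distilled result.
    from openai import OpenAI

    client = OpenAI()

    def run_subagent(task: str) -> str:
        messages = [{"role": "user", "content": task}]
        reply = client.chat.completions.create(model="gpt-4.1-mini", messages=messages)
        return reply.choices[0].message.content

    # Parent agent keeps its carefully curated context...
    parent_messages = [{"role": "system", "content": "You are the orchestrator."}]
    summary = run_subagent("Read ./docs and summarise the auth flow in five bullet points.")
    # ...and only the distilled summary comes back in.
    parent_messages.append({"role": "user", "content": f"Sub-agent report:\n{summary}"})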
Also, please note this is nowhere on the Terminal-Bench leaderboard anymore. I'd advise everyone reading the comments here to be aware of that. This isn't a CLI to use. Just a good experiment and write-up.
https://github.com/willswire/dotfiles/blob/main/claude/.clau...
I would add subagents though. They allow for the pattern where the top agent directs / observes a subagent executing a step in a plan.
The top agent is both better at directing a subagent and keeps its own context clean of details that don't matter - otherwise those details would all end up in the same step of the plan.
edit: referring to Anthropic and the like
One thing I do find is that subagents are helpful for performance -- offloading tasks to smaller models (gpt-oss specifically for me) gets data to the bigger model quicker.
You can sandbox off the data.
The agent I'm writing shares some ideas with Pi but otherwise departs quite drastically from the core design used by Claude Code, Codex, Pi etc, and it seems to have yielded some nice benefits:
• No early stopping ("shall I continue?", "5 tests failed -> all tests passed, I'm done" etc).
• No permission prompts but also no YOLO mode or broken Seatbelt sandboxes. Everything is executed in a customized container designed specifically for the model and adapted to its needs. The agent does a lot of container management to make this work well.
• Agent can manage its own context window, and does. I never needed to add compaction because I never yet saw it run out of context.
• Seems to be fast compared to other agents, at least in any environment where there's heavy load on the inferencing servers.
• Eliminates "slop-isms" like excessive error swallowing, narrative commenting, dropping fully qualified class names into the middle of source files etc.
• No fancy TUI. I don't want to spend any time fixing flickering bugs when I could be improving its skill at the core tasks I actually need it for.
It's got downsides too: it's very overfit to the exact things I've needed and to the corporate environment it runs in. It's not a full replacement for CC or Codex. But I use it all the time and it writes nearly all my code now.
The agent is owned by the company, and they're starting to ask whether it could be productized, so I suppose I can't really go into the techniques used to achieve this, sorry. Suffice it to say that the agent design space is far wider and deeper than you'd initially intuit from reading articles like this. None of the ideas in my agent are hard to come up with, so explore!
When building something minimal, especially in areas like agent-based tooling or assistants, the challenge isn’t only about reducing surface area — it’s about focusing that reduction around what actually solves a user’s problem.
A minimal agent that only handles edge cases, or only works in highly constrained environments, can feel elegant on paper but awkward in practice. Conversely, a slightly less minimal system that still maintains clarity and intent often ends up being more useful without being bloated.
In my own experience launching tools that involve analysis and interpretation, the sweet spot always ends up being somewhere in the intersection of:
- clearly scoped core value,
- deliberately limited surface, and
- enough flexibility to handle real user variation.
Curious how others think about balancing minimalism and practical coverage when designing agents or abstractions in their own projects.