- This is nice! I tried it for a bit and it was indeed quite fast.
Are you looking for contributors, or are you building this as a personal tool?
I ran into some issues when attempting to use different models, though: gpt-5.5 on Azure doesn't work, even with the OpenAI compatible endpoint, because "max_tokens" has been replaced with "max_completion_tokens". And it doesn't appear possible to pass through custom headers, so I wasn't able to specify reasoning_effort for deepseek models.
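For anyone hitting the same wall, a minimal sketch of the payload difference (assuming an OpenAI-compatible `/chat/completions` endpoint; the model-name prefix check is illustrative, not the harness's actual logic):

```python
# Build a chat-completions payload for models that reject "max_tokens"
# in favor of "max_completion_tokens" (newer OpenAI/Azure models).
def build_payload(model: str, prompt: str, limit: int) -> dict:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    # Illustrative heuristic: newer model families only accept
    # max_completion_tokens; older ones still take max_tokens.
    if model.startswith(("o1", "gpt-5")):
        payload["max_completion_tokens"] = limit
    else:
        payload["max_tokens"] = limit
    return payload

# Custom headers (e.g. for reasoning_effort on some providers) would have
# to be threaded through to the HTTP client separately -- the complaint
# above is that the harness exposes no knob for that at all.
```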
- Thanks, I've been tooling away in my spare time on my own version of this -- both to get a deeper understanding of agents (everyone suggests writing your own) and to help learn Rust. I'd like to retain `pi`'s configurability though, the ability to self-mutate and generate new tools is incredibly useful, particularly because I don't think any of these things should have access to arbitrary code execution through `bash` (of course, if they have access to, say, `edit` and `cargo run` they still have arbitrary code exec, but...) (so I tend to generate tools on the fly when I encounter something the no-bash agent needs to do).
- This is much needed!
Compared to Codex CLI, Claude Code is insanely slow.
$ time claude --version
2.1.143 (Claude Code)
________________________________________________________
Executed in    4.39 secs      fish           external
   usr time   29.68 millis    0.26 millis   29.41 millis
   sys time   71.30 millis    1.30 millis   70.00 millis
5 seconds to show me the version number! I'm guessing Claude Code also needs a rewrite in Rust. But from what I saw of the leaked TypeScript code, a line-by-line port would be pretty bad; it needs a new architecture that matches Rust idioms.
by throwa356262
6 subcomments
- "RAM footprint: ~8MB on an empty session, ~12MB when working"
I like this; Claude Code uses multiple gigabytes, which is really annoying on low-end laptops.
- I (somewhat jokingly) wrote one recently too... https://github.com/pnegahdar/nano in under 200 lines. REPL, sessions, non-interactive mode, approvals, etc.
The smarter the models get the less the harnesses matter (outside of devx).
Maybe one day I'll run it through SWE-bench.
by hiAndrewQuinn
3 subcomments
- The codebase was small enough that I handed it over to DeepSeek v4 Flash in Pi to skim through for any risky business, and I didn't find anything concerning. Nice work.
by 360MustangScope
1 subcomment
- Funny this comes out today. I was just about to start writing one in Rust. It's amazing watching opencode slowly leak memory, balloon to 6 GB on a large project, and then get slower and slower.
Will check this out! Seems cool!
by phplovesong
0 subcomments
- Does anyone use Claude with custom agents? IIRC they banned that use and only allow Claude's own agent.
- I built something with a similar philosophy here: https://github.com/khimaros/airun -- it is intended to be piped and redirected. It discovers skills, AGENTS files, and prompt templates from Claude Code, Pi.dev, OpenCode, and others. No TUI, but it does have a basic tool-calling loop.
$ airun -q -p 'output a shell command for linux to display the current time. output only the command with no other code fencing or prose' | airun -q -s 'review the provided shell command, determine if it is safe, run it only if it is safe, and then summarize the output from the command' --permissions-allow='bash:date *'
- Looks promising, is OpenAI subscription support planned?
- I really like this. Pi becomes sluggish after installing a couple of extensions. I was trying to port Pi to Rust myself, but it was consuming too many tokens.
Is there an API like Pi's so that I can create extensions?
by inciampati
2 subcomments
- > Integrated Ralph Wiggum loops: looping capabilities for long-horizon tasks
Imo, this shouldn't be embedded in the executor layer. Orchestration should handle this.
by joeyguerra
0 subcomments
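The "keep the looping out of the executor" point above could be sketched roughly like this (`run_turn` and `is_done` are hypothetical stand-ins, not the tool's real API):

```python
# Orchestration-level looping: the executor runs one bounded agent turn;
# the loop that drives a long-horizon task lives a layer above it.

def run_turn(task: str, history: list) -> str:
    # Stand-in for a single executor invocation (one agent turn).
    return f"worked on: {task} (turn {len(history) + 1})"

def is_done(history: list, max_turns: int) -> bool:
    # Stand-in completion check; a real one would inspect tests,
    # diffs, or model output rather than just counting turns.
    return len(history) >= max_turns

def orchestrate(task: str, max_turns: int = 3) -> list:
    history = []
    while not is_done(history, max_turns):
        history.append(run_turn(task, history))
    return history
```

The point being that the executor stays a dumb, single-shot primitive, and retry/loop policy is swappable without touching it.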
- The war of the coding agents has begun.
- As you can see, writing a coding agent in a compiled language makes a ton of sense: you can run multiple agents efficiently instead of fighting leaks and tools that consume gigabytes of RAM.
by sergiotapia
1 subcomment
- Given agent harnesses affect so much of the performance of models, it would be great to see some kind of benchmark on how this tool performs compared to claude/codex/opencode/pi etc.
by noodletheworld
0 subcomments
- Are agent harnesses the new web framework?
Everyone wants to write one; a new one is easy to start but tough to get to “prod ready”, and the landscape is littered with failed attempts?
Certainly feels like it.
This is really good though; works well and at least has a clearly articulated raison d'être.
by choopachups
0 subcomments
- Dude, I'm actually in disbelief at how long we put up with the pile of shit that is Claude Code.
by usernametaken29
2 subcomments
- Now make it into an IntelliJ plugin that has proper access to the search index. I'll pay for it. For Christ's sake, it's insane that JetBrains hasn't figured this out yet.
- This is what I've been waiting for:
a low-level language. Please, no more scripting-language TUIs!
by slopinthebag
1 subcomment
- I love these. Coding agents aren't very difficult to build, it's a TUI + tools + getting a nice agent loop working. The hardest part seems to be supporting all of the different providers and model quirks. What is interesting is seeing the experimentation: some provide tons of tools, others provide a single python interpreter and have the agent use tools via sandboxed python scripts, others use minimal tools and lean on bash. Personally I want a harness that gives a ton of control to the user to let them steer the LLM, less agent and more augmentation. Maybe I'll have to build it myself. If anyone has ideas, let me know.
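For a sense of how small that core loop really is, a minimal sketch (the model call is stubbed out; a real harness would dispatch to an LLM API and a registry of tools):

```python
# Minimal agent loop: the model proposes either a tool call or a final
# answer; tool results are fed back until the model finishes.

def fake_model(messages):
    # Stub standing in for an LLM call: request one tool, then finish.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "README.md"}}
    return {"final": "done: summarized README.md"}

TOOLS = {
    "read_file": lambda args: f"<contents of {args['path']}>",
}

def agent_loop(prompt, model=fake_model, max_steps=8):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        if "final" in reply:
            return reply["final"]
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")
```

All the real difficulty lives inside the stubbed parts: provider quirks, streaming, and tool sandboxing, as the comment above says.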
- [flagged]
by edgardurand
0 subcomments
- [flagged]
by phoebe_builds
0 subcomments
- [flagged]
- [flagged]
by nimchimpsky
0 subcomments
- [dead]
by brcmthrowaway
0 subcomments
- !RemindMe 6 months
by andrew_kwak
0 subcomments
- Been hearing a lot about Rust lately. I'm curious how Zerostack handles concurrency compared to more traditional Unix tools. Anyone tried it for something CPU-intensive?
by tencentshill
0 subcomments
- This may be the most HN post I have ever seen.