GET /hello/:world
  |> jq: `{ world: .params.world }`
  |> handlebars: `<p>hello, {{world}}</p>`

describe "hello, world"
  it "calls the route"
    when calling GET /hello/world
    then status is 200
    and output equals `<p>hello, world</p>`
Here's a WIP article about the DSL: https://williamcotton.com/articles/introducing-web-pipe
And the DSL itself (written in Rust):
https://github.com/williamcotton/webpipe
And an LSP for the language:
https://github.com/williamcotton/webpipe-lsp
And of course my blog is built on top of Web Pipe:
https://github.com/williamcotton/williamcotton.com/blob/mast...
It is absolutely amazing that a solo developer (with a demanding job, kids, etc) with just some spare hours here and there can write all of this with the help of these tools.
But here's the Ruby version of one of the scripts:
BEGIN {
  result = [1, 2, 3, 4, 5]
    .filter { |x| x % 2 == 0 }
    .map { |x| x * x }
    .reduce { |acc, x| acc + x }
  puts "Result: #{result}"
}
The point being that running a script with the "-n" switch runs BEGIN/END blocks once and puts an implicit "while gets ... end" loop around everything else. Adding "-a" auto-splits each line like awk. Adding "-p" also prints $_ at the end of each iteration. So here's a more typical awk-like experience:

    ruby -pe '$_.upcase!' somefile.txt  # $_ holds the whole line

Or, to extract the second field:

    ruby -F, -ane 'puts $F[1]'

-F sets the character to split on, and -a adds an implicit $F = $_.split.
That is not to detract from what he's doing, because it's fun. But if your goal is just a better Awk, then Ruby usually is a better Awk, and so, for that matter, is Perl. For most things where an Awk script doesn't fit on the command line, the only real reason to use Awk is that it's more likely to be available.

The part I found neat was that I used a local LLM (some quantized version of QwQ from around December or so, I think) that had a thinking mode, so I was able to follow the thought process. Since it was running locally (and it wasn't a MoE model), it was slow enough for me to follow in real time, and I found it fun watching the LLM try to understand the language.
One other interesting part is the language description had a mistake but the LLM managed to figure things out anyway.
Here is the transcript, including a simple C interpreter for the language and a test for it at the end with the code the LLM produced:
https://app.filen.io/#/d/28cb8e0d-627a-405f-b836-489e4682822...
It even comes with an auto translator for converting awk to Perl: https://perldoc.perl.org/5.8.4/a2p
It also provides all the features of sed. The command-line flags to learn in order to get all of these are -p, -i, -n, -l, -a, and -e. For example, perl -pi -e 's/old/new/g' file.txt does the in-place substitution you would otherwise reach for sed to do.
Anyway, I have/had an obscene amount of Claude Code Web credits to burn, so I set it to work on implementing a completely standalone Rust implementation of Perchance [0] using documentation and examples alone, and, well, it exists now [1]. And yes, it was done entirely with CCW [2].
It's deterministic, can be embedded anywhere that Rust compiles to (including WASM), has pretty readable code, is largely pure (all I/O is controlled by the user), and features high-quality diagnostics. As proof of it working, I had it build and set up the deploys for a React frontend [3]. This also features an experimental "trace" feature that Perchance-proper does not have, but it's experimental because it doesn't work properly :p
Now, I can't be certain it's 1-for-1-spec-accurate, as the documentation does not constitute a spec, and we're dealing with randomness, but it's close enough that it's satisfactory for my use cases. I genuinely think this is pretty damn cool: with a few days of automated PRs, I have a second, independent mostly-complete interpreter for a language that has never had one (previous attempts, including my own, have fizzled out early).
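To illustrate the determinism claim, here's a minimal Rust sketch of the seed-driven idea (a generic illustration using the rand crate; pick is made up and is not the interpreter's actual API):

    use rand::rngs::StdRng;
    use rand::{Rng, SeedableRng};

    // Same seed in, same "random" choice out: with the host supplying the
    // seed (and all I/O), every evaluation is reproducible.
    fn pick<'a>(items: &[&'a str], seed: u64) -> &'a str {
        let mut rng = StdRng::seed_from_u64(seed);
        items[rng.gen_range(0..items.len())]
    }

    fn main() {
        let options = ["sword", "shield", "potion"];
        assert_eq!(pick(&options, 42), pick(&options, 42)); // deterministic
    }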
[0]: https://perchance.org/welcome
[1]: https://github.com/philpax/perchance-interpreter
[2]: https://github.com/philpax/perchance-interpreter/pulls?q=is%...
[3]: https://philpax.me/experimental/perchance/
Worked on the first run. I mean, the second, because the first run was by default a dry run printing a beautiful table, and the actual run requires a CLI arg, and it also makes a backup.
It was a complete solution.
A purely interpretive implementation of the kind you'd write in school; still, it's above and beyond anything I'd have any right to complain about.
Anyway, so far I haven't been able to get any nice results from any of the obvious models; hopefully they're finally smart enough.
but I learned a ton building this thing. it has an LSP server now with autocompletion and go-to-definition, a type checker, a very much broken auto-formatter (this was surprisingly harder to get done than the LSP), the whole deal. all stuff that previously would have taken months or a whole team to build. there are tons of bugs and it's not something I'd use for anything; nu shell is obviously way better.
the language itself is pretty straightforward. you write functions that manipulate processes and strings, and any public function automatically becomes a CLI command. so if you write "public deploy $env: str $version: str = ..." you get a ./script.shady deploy command with proper --help and everything. it does this by converting the function signatures into clap commands.
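to make that concrete, here's a rough sketch of the signature-to-CLI mapping in Rust with clap 4 (an illustration, not the actual implementation; command_for and the tuple representation of parameters are made up):

    use clap::{Arg, Command};

    // turn a parsed function signature like
    //   public deploy $env: str $version: str = ...
    // into a clap subcommand with one required positional arg per parameter.
    fn command_for(name: &str, params: &[(&str, &str)]) -> Command {
        params.iter().fold(Command::new(name.to_string()), |cmd, (param, _ty)| {
            cmd.arg(Arg::new(param.to_string()).required(true))
        })
    }

    fn main() {
        let cmd = command_for("deploy", &[("env", "str"), ("version", "str")]);
        // `./script.shady deploy <env> <version>`, with --help generated for free
        cmd.get_matches_from(["deploy", "prod", "1.2.3"]);
    }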
while building it I had lots of process pipelines deadlocking, type errors pointing at the wrong spans, that kind of thing. it seems like LLMs really struggle with race conditions and the concept of time, but they seem to be getting better. fixed a 3-process pipeline hanging bug last week that required actually understanding how the pipe handles worked. but as others pointed out, I've also been impressed by how frequently sonnet 4.5 writes working code when given a bit of guidance.
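for anyone curious, here's a minimal std-only Rust sketch of the pipe-handle pitfall behind that class of hang (a generic example, not shady's actual code): the parent has to drop its write end of a pipe or the reader never sees EOF.

    use std::io::Write;
    use std::process::{Command, Stdio};

    fn main() -> std::io::Result<()> {
        // `wc -l` reading from a pipe we write into
        let mut child = Command::new("wc")
            .arg("-l")
            .stdin(Stdio::piped())
            .spawn()?;

        child.stdin.as_mut().unwrap().write_all(b"one\ntwo\n")?;

        // the crucial line: drop our write end so `wc` sees EOF.
        // keep this handle alive and wait() blocks forever, the same
        // kind of hang as the 3-process pipeline bug above.
        drop(child.stdin.take());

        child.wait()?;
        Ok(())
    }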
one thing that blew my mind: I started with pest for parsing, but when I got to the LSP I realized incremental parsing would be essential. because I was diligent about test coverage, sonnet 4.5 perfectly converted the entire parser to tree-sitter for me. all tests passed. that was wild. earlier versions of the model, like 3.5 or 3.7, struggled with Rust quite a bit in my experience.
claude wrote most of the code but I made the design decisions and had to understand enough to fix bugs and add features. learned about tree-sitter, LSP protocol, stuff I wouldn't have touched otherwise.
still feels kinda lame to say "I built this with AI" but also... I did build it? and it works? not sure where to draw the line between "AI did it" and "AI helped me do it"
anyway just wanted to chime in from someone else doing this kind of experiment :)
it would be nice if people doing these things gave us a transcript or recording of their dialogue with the LLM, so that more people can learn.
I think I was the first to write an LLM language, and the first to use LLMs to write a language, with this project (right at ChatGPT launch, GPT-3.5): https://github.com/nbardy/SynesthesiaLisp
https://github.com/GoogleCloudPlatform/aether
This was completely vibe coded. I never had to edit the code, though it was very interactive. The whole thing took less than a month of some evenings and weekends.
(Note: it’s ugly on purpose, as I’m playing with ideas around languages that LLMs would naturally be effective using.)
jslike (acorn-based parser):
https://github.com/artpar/jslike
https://www.npmjs.com/package/jslike

wang-lang (I couldn't get ASI to work like JavaScript's in this nearley-based grammar):
https://www.npmjs.com/package/wang-lang
As I understand it, this would require somehow “saving the state” of the LLM as it exists after the last prompt, since I don’t think the LLM can arrive at the same state just by being fed the code it has written.
Did you also review the code that runs the tests?
I was dreaming of a JS-to-machine-code compiler, but then thought, why not just start from scratch and have what I want? It's a lot of fun.
While working in C, I can’t count the number of times I wanted to return an array.
It's interesting comparing what different LLMs can get done.
In other words, LLMs eat this up.
I'm sorry Dave, I'm afraid I can't do that. I cannot implement this 24-bit memory model.

I have a slight feeling it would suck even more than, say, PHP or JavaScript.
A math module that is not tested for division by zero. Classic LLM development.
The suite is mostly happy paths, which is consistent with what I've seen LLMs do.
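For illustration, here's the kind of unhappy-path test that tends to be missing, as a self-contained Rust sketch (div and EvalError are toy stand-ins for the math module under discussion, not its actual API):

    #[derive(Debug, PartialEq)]
    enum EvalError {
        DivisionByZero,
    }

    // toy stand-in for the math module's division routine
    fn div(a: i64, b: i64) -> Result<i64, EvalError> {
        if b == 0 {
            return Err(EvalError::DivisionByZero);
        }
        Ok(a / b)
    }

    #[test]
    fn division_by_zero_is_an_error_not_a_panic() {
        assert_eq!(div(1, 0), Err(EvalError::DivisionByZero));
    }

    #[test]
    fn happy_path_division_still_works() {
        assert_eq!(div(10, 2), Ok(5));
    }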
Once you set up coverage and tell it "there's a hidden branch that the report isn't able to display on line 95 that we need to cover", things get less fun.
This is exactly the problem. When I first got my mitts on Claude Code I went bonkers with this kind of thing. Write my own JITing Lisp in a weekend? Yes please! Finish the one-third-done WASM VM I'd shelved? Sure!
The problem is that you dig too deep and unearth the Balrog of "how TF does this work?" You're creating future problems for yourself.
The next frontier for coding agents is for these companies to actually solve the UX problem of keeping the human involved, in the driver's seat, and educated about what's happening.
https://www.bloomberg.com/news/articles/2025-11-19/how-the-p...