> When I made my 2029 prediction this is more-or-less the quality of result I had in mind.
The author seems to be making a lot of allowances and showing a lot of leniency here.
So, it is seemingly impressive that someone was able to use agents to build a browser.
But they used trillions of tokens? This equates to millions of dollars of spend. Are we really happy with this?
The browser itself is not complete, either: the article itself notes rendering glitches. So that's millions of dollars for something with obvious bugs.
This is also pure agent code. Can a code base like this ever be maintained by a team of humans? Are you vendor-locked into a specific model if you want to build more features? How will support work? How will releases work? The lack of reflection on the rest of the software lifecycle beyond building is striking.
So after reflecting, I'm not sure whether any of this is impressive beyond "someone with unlimited tokens built a browser using AI agents". It's the same class of problem being solved over and over again; nothing new is really being done here.
Maybe it's just me, but there's much more to software than just building it.
Most of the big ones are things like skia, harfbuzz, wgpu - all totally reasonable IMO.
The two that stand out for me as more notable are html5ever for parsing HTML and taffy for handling CSS grids and flexbox - that's vendored with an explanation of some minor changes here: https://github.com/wilsonzlin/fastrender/blob/19bf1036105d4e...
Taffy is a solid library choice, but it's probably the strongest ammunition for anyone who wants to argue that this shouldn't count as a "from scratch" rendering engine.
I don't think it detracts much if at all from FastRender as an example of what an army of coding agents can help a single engineer achieve in a few weeks of work.
I'm not saying this only happens with LLMs; in fact, it should be compared against, e.g., a dev team of 4-5.
Its ability to pattern-match its way through a code base is impressive until it isn't, and you always have to pull it back to reality when it goes astray.
Its ability to plan ahead is so limited, and its way of "remembering" is so basic. Every day it's a bit like 50 First Dates.
Nonetheless, seeing what can be achieved with this pseudo-intelligent tool leaves me a little in awe. It's the contrast between something that isn't intelligent and yet achieves clearly useful outcomes if steered correctly, combined with the feeling that we have only just started to understand how to interact with this alien.
Although I dissented on the decision, we banned the use of AI in Servo. Outside of the project I've been enjoying agentic coding, and I do think it can already be used today to build production-grade software of browser-like complexity.
But this project shows that autonomous agents without human oversight are not the way forward.
Why? Because the generated code makes little sense from a conceptual perspective and does not provide a foundation on which to eventually build an entire web engine.
For example, I've just looked into the IndexedDB implementation, which happens to be what I am working on at the moment in Servo.
Now, my work in Servo is incomplete, but conceptually the code that is in place makes sense and there is a clear path towards eventually implementing the thing as a whole.
In Fastrender, you see an Arc<Mutex<Database>> which is never going to work, because by definition a production browser engine will have to involve multiple processes. That doesn't mean you need the IPC in a prototype, but you certainly should not have shared state--some simple messaging between threads or tasks would do.
The above is an easy coding fix for the AI, but it requires input from a human with a pretty good idea of what the architecture should look like.
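To make the point concrete, here's a minimal sketch of the alternative I mean: a single thread owns the database, and everything else talks to it over channels. The Database type and the request enum are hypothetical stand-ins, not FastRender or Servo code.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for the real database type.
struct Database {
    entries: HashMap<String, String>,
}

// Requests sent to the thread that owns the database.
enum DbRequest {
    Get { key: String, reply: mpsc::Sender<Option<String>> },
    Put { key: String, value: String },
}

fn spawn_db_owner() -> mpsc::Sender<DbRequest> {
    let (tx, rx) = mpsc::channel::<DbRequest>();
    thread::spawn(move || {
        // Exactly one owner, no Arc<Mutex<...>>: this thread is the
        // only code that ever touches the database.
        let mut db = Database { entries: HashMap::new() };
        for req in rx {
            match req {
                DbRequest::Get { key, reply } => {
                    let _ = reply.send(db.entries.get(&key).cloned());
                }
                DbRequest::Put { key, value } => {
                    db.entries.insert(key, value);
                }
            }
        }
    });
    tx
}

fn main() {
    let db = spawn_db_owner();
    db.send(DbRequest::Put { key: "k".into(), value: "v".into() }).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    db.send(DbRequest::Get { key: "k".into(), reply: reply_tx }).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), Some("v".into()));
}
```

The message-passing boundary is what later swaps cleanly for real IPC when the engine goes multi-process.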
For comparison, when I look at the code in Ladybird, yet another browser project, I can immediately find my way around what is for me a stranger codebase - not just a single file, but large swaths of the project - and understand things like how their rendering loop works. With Fastrender I find it hard to find my way around, despite all the architectural diagrams in the README.
So what do I propose instead of long-running autonomous agents? The focus should shift towards demonstrating how AI can effectively assist humans in building well-architected software. The AI is great at coding, but you eventually run into what I call conceptual bottlenecks, which can be overcome with human oversight. I've written about this elsewhere: https://medium.com/@polyglot_factotum/on-writing-with-ai-87c...
There is one very good idea in the project: adding the web standards directly in the repo so it can be used as context by the AI and humans alike. Any project can apply this by adding specs and other artifacts right next to the code. I've been doing this myself with TLA+, see https://medium.com/@polyglot_factotum/tla-in-support-of-ai-c...
To further ground the AI code output, I suggest telling it to document the code with the corresponding lines from the spec.
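Something like this, for instance - an illustrative sketch only, where the cited steps are paraphrased from the IndexedDB spec (https://w3c.github.io/IndexedDB/) and the types are made up for the example:

```rust
#[derive(Debug, PartialEq)]
enum OpenError {
    // Spec (paraphrased, from IDBFactory.open()): if version is 0,
    // the implementation must throw a TypeError.
    TypeError(&'static str),
}

// Hypothetical helper mirroring part of the "open a database" algorithm.
fn resolve_version(existing: Option<u64>, requested: Option<u64>) -> Result<u64, OpenError> {
    // Spec step (paraphrased): a requested version of 0 is invalid.
    if requested == Some(0) {
        return Err(OpenError::TypeError("version must be >= 1"));
    }
    // Spec step (paraphrased): if no version is given, use the database's
    // current version, or 1 if the database does not yet exist.
    Ok(requested.or(existing).unwrap_or(1))
}

fn main() {
    assert_eq!(resolve_version(None, None), Ok(1));
    assert_eq!(resolve_version(Some(3), None), Ok(3));
    assert!(resolve_version(None, Some(0)).is_err());
}
```

The point is that every branch is traceable back to a line of the spec, which keeps both the AI and the human reviewer honest.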
Back in early 2025 when we had those discussions in Servo about whether to allow some use of AI, I wrote this guide https://gist.github.com/gterzian/26d07e24d7fc59f5c713ecff35d... which I think is also the kind of context you want to give the AI. Note that this was back in the days of accepting edits with tabs...
At a minimum:
1. You've got an incredibly clearly defined problem at the high level.
2. Extremely thorough tests for every part that build up in complexity.
3. Libraries, APIs, and tooling that are all compatible with one another because all of these technologies are built to work together already.
4. It's inherently a soft problem: you can make partial progress on it.
5. There's a reference implementation you can compare against.
6. You've got extremely detailed documentation and design docs.
7. It's a problem that inherently decomposes into separate components in a clear way (sketched after this list).
8. The models are already trained not just on examples for every module, but on example browsers as a whole.
9. The done condition for this isn't a working browser, it's displaying something.
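On point 7, here is roughly what that decomposition looks like. The types and signatures are illustrative placeholders, not taken from any real engine:

```rust
// Each stage has a crisp input/output contract, so each can be built
// and tested in isolation. All types here are illustrative placeholders.
struct Dom;        // output of HTML parsing
struct StyledTree; // DOM annotated with computed styles
struct LayoutTree; // boxes with resolved positions and sizes
struct Frame;      // rasterized pixels

fn parse(_html: &str) -> Dom { Dom }
fn style(_dom: &Dom, _css: &str) -> StyledTree { StyledTree }
fn layout(_styled: &StyledTree, _viewport: (u32, u32)) -> LayoutTree { LayoutTree }
fn paint(_tree: &LayoutTree) -> Frame { Frame }

fn render(html: &str, css: &str) -> Frame {
    // The whole engine is, to a first approximation, a straight
    // composition of independently testable stages.
    let dom = parse(html);
    let styled = style(&dom, css);
    let tree = layout(&styled, (1280, 720));
    paint(&tree)
}

fn main() {
    let _frame = render("<p>hello</p>", "p { color: blue }");
}
```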
This isn't a realistic setup for anything that 99.99% of people work on. It's not even a realistic setup for what actual browser developers do, since they must implement new or fuzzy things that aren't in the specs.
Note point 9 - that's critical. Getting to the point where you can show simple pages is one thing. Getting to the point where you have a working production browser engine is not just 80% more work; it's probably considerably more than 100x more work.
AI makes it cheap (eventually almost free) to traverse the already-discovered and reach the edge of uncharted territory. If we think of a sphere, where we start at the center, and the surface is the edge of uncharted territory, then AI lets you move instantly to the surface.
If anything solved becomes cheap to re-instantiate, does R&D reach a point where it can’t ever pay off? Why would one pay for the long-researched thing when they can get it for free tomorrow? There will be some value in having it today, just like having knowledge about a stock today is more valuable than the same knowledge learned tomorrow. But does value itself go away for anything digital, and only remain for anything non-copyable?
The volume of a sphere grows faster than its surface area (V = (4/3)πr³ versus A = 4πr², so the ratio V/A = r/3 grows without bound). But if traversing the interior is instant and frictionless, what does that imply?
I think good abstraction design and a good test suite will make or break the success of future coding projects.