Lack of strictly enforced static typing makes agents fail much sooner with Python. In my opinion, Rust and Scala are the best targets for agentic flows - and, coincidentally, they have the most advanced type systems among mainstream languages.
But any statically typed language behaves better than any dynamically/duck-typed language. By "better" I mean faster delivery and fewer shipped defects.
Another thing that helps (though it's not generally applicable): ask your agent to verify critical protocols with formal proofs in TLA+/Lean/Coq. Agents are bad at formal proofs - but still much better than most humans.
If you're using an LLM to write code I think the rules would be
1. Use a language you know really well so you can read it easily, and add to it as needed.
2. Use a language that has a large training set so the LLM can be most efficient.
3. Use a language that is easy to read.
If your language has a small training set, or you don't intend to do much addition, or you don't really know any language that well, or you're restricted from using rule 1 for some reason, then rules 2 and 3 move up - and Python has a large training set and is easy to read.
I could write in Brainfuck with AI, but I presume I wouldn't get the same results as with Python.
My follow up question: with AI now, why care about a lang until you need to?
However, if you are willing to stub your toes, retry, and pay more money, an entire new world opens up. Languages like python seem to fall apart faster in extremely large projects.
I've got a collection of interdependent .NET codebases with about 50 megs of raw source between them. Having C# be strongly typed seems like an essential backbone for keeping everything on rails in my agentic scenarios. The code edits have been flawless for several months now. I've got successful apply_patch usages that touch 20 files at a time. LLM code editing performance might be mostly language agnostic once we compensate for the strictness of the type system. More specifically, how much useful information is returned at compile time.
Compile time errors and warnings are probably the most powerful alignment mechanism available. Some ecosystems allow for you to specify your own classes of errors and warnings. I think tools like Roslyn Analyzers might be more powerful than unit tests in this application. Domain-specific compilation feedback feels like the holy grail to me.
https://learn.microsoft.com/en-us/visualstudio/code-quality/...
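As a rough Python analogue of that idea, a domain-specific rule can be expressed as a small stdlib `ast` walk that emits compiler-style diagnostics an agent can read and act on. This is a minimal sketch; the rule and the `DOM001` code are made up for illustration:

```python
import ast

def check_no_eval(source: str) -> list[str]:
    """Flag every call to eval() with a line-numbered diagnostic."""
    diagnostics = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            diagnostics.append(f"line {node.lineno}: DOM001 eval() is banned")
    return diagnostics

# A compiler-style error the agent can read and fix:
print(check_no_eval("x = 1\ny = eval(input())"))
```

Wired into the edit loop (pre-commit hook, CI step), this gives the agent the same kind of dense, actionable feedback a compiler error does.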
I actually (mostly) enjoy reading the code that the LLMs create in Nim. It's quick to read and look through for refactors or cleanups. Compile times are in seconds, so the LLM is usually the slow piece. It's fun and productive. With Python + LLMs I'm seeing them just create ever more layers of unmanageable cruft.
Recently I wanted "magic" behavior to get OpenAPI types and swagger.json along with auto parsing my rest APIs for me. I had Codex make a library for me using compile time reflection and a sprinkling of macros. Done, simple.
Is there some incentive I’m not seeing?
I do think enforcing correctness at the type system level is a good idea for AI, which is why I often choose languages like C# and Rust over Python. However, for some things Python is definitely the correct tool for the job.
1) Python is expressive and has packages for everything => faster iteration times because much fewer tokens
2) It doesn't require a compilation step, so when I'm quickly iterating on something, especially if my laptop doesn't have the target hardware, the flow "copy the sources to the target machine and restart" is superfast (a couple of milliseconds)
3) Python most likely represents the largest share of training data, so almost all LLMs can one-shot almost everything
And when my prototype is ready, and we want to go to production, I can ask the LLM to port it to Go with all the necessary conventions/ceremonies and all.
IME very few people think Go is harder than TS or JS - TS is quite complex and JS is a footgun range.
JS got popular for nontechnical reasons and TS is an attempt to make lemonade out of it.
Remember, you are the judge of whether the code is OK. If you use assembler you might get really performant code - but can you trust it?
Of course it might be a good incentive to learn rust or go. Or challenge yourself to learn something really cool like LISP, COBOL, FORTRAN, APL or J. (just kidding...)
just my 2 ct...
Great! Let's look back on this not too far in the future.
Another benefit to using Python: if you subscribe to writing/vibing a throwaway version first, a Python version is 100x better than a spec.
(Disclaimer: I teach Python and AI for a living and am doing a tutorial at PyCon this week, "Beyond vibe coding". I also use other languages, as there are times when Python isn't appropriate.)
Rust isn't perfect due to rather long turnaround for compile/test iterations, but a lot of those can be avoided if the type checking is quicker than compilation. Rust is also more verbose than python and other very high level languages, which means your token budget is eaten more quickly as it works on a lower level.
Seriously though, almost all the examples in TFA are of rewriting existing code. It may be that Python is still best for the rapid dev iteration. Then sure, cross-compile into Rust via the LLM.
Plus, if we care about token usage, Python has a lot more opportunities for a compact "import thing_I_need" than having to generate entire libraries in Rust.
No he didn't. The compiler is basically useless, as it produces vastly inferior code to gcc/clang.
Discussed here with 698 comments (https://news.ycombinator.com/item?id=47120899)
If the code were written in Java, I'd have more to read. If it were in JavaScript, I'd be slower following the calls (although the type system might catch issues more quickly - not a problem in my experience). I think Python is a good choice.
This means you don't have to muck around with supplying the right documentation for each version of each dependency, or worry about hallucinated interfaces (at least with the latest models).
In the past you'd have to dig through a foreign codebase manually to figure out why a documented interface for a dependency is not working as expected, but frontier models automate that quite well.
Frontend CSS/HTML is pretty bad though. Although they can work, it takes a lot of pushing. It's probably normal since they do not actually have eyes yet.
As a benefit, I find that static types help AI make more correct/better decisions than you see in PHP (where types are mostly just class types, nominal or primitive [lol, no generics]).
But it's pretty much true; I foresee a fall in dynamic languages, as the use case is pretty much void and null.
Python does have a much larger ecosystem of course, so with Go you have to develop from scratch what already exists in Python. But for smaller projects, you can also have an AI write a clean-room implementation in Go of some project in Python. So you aren't necessarily locked into one ecosystem anymore.
And in my experience, you don't even need to know the language. I have a co-worker who's basically not a programmer, but got multiple implementations of applications working sooner than our dev teams doing it by hand. You should be a coder so you can architect and orchestrate the coding, but 'language' isn't a barrier anymore.
Also, totally FOSS. Unparalleled library ecosystem (no, I don't buy into the hype about re-rolling all your own dependencies).
Beyond that, Go is kind of nice, but the lack of inheritance is stifling. Python has everything that's needed and very little that's not.
But also, I suspect the article is just wrong. "The hard languages got easy first" isn't true in practice and the impressive examples given are not representative or as magical as the poster makes them out to be.
The takeaway might be right in the end, but the post isn't right in the beginning.
My other problem with most of the other ecosystems: ts/npm, python/uv, rust/cargo is that they all have build-time scripts that are controlled by others that execute automatically. This is a real problem because the LLM will just install things and proceed to send your home directory through a juicer. I feel a bit of a paranoiac now doing this, but I have a script that launches a podman container with just the source directory and a binary directory loaded (for caching) which compiles everything.
I know there's some sequence of steps I can take to protect myself, but if the LLM accidentally uses pnpm to run dev build scripts when I had the right config on npm or whatever, I know I'm screwed. So now I do all these shenanigans with Rust (to the extent that I vendor old deps sometimes). So the ideal language to me now is one with very few of these footguns and sandtraps which has a tight iteration loop.
But on the other hand, maybe you could learn some other programming language, particularly with AI help. If that's what you wanted to do anyway, it seems like a good time to learn.
There's a niche available for a language which is relatively easy for a human to read, but with a very powerful at the expense of difficult to use type system. The language would let you make all sorts of assertions whose meaning are easy for the human to see, but to compile would need to come along with correctness proofs. The language is meant to be written by AI, which can battle the compiler, and write the proofs, but then read by humans who can verify that the AI wrote the program they wanted and/or direct the AI to make changes.
When I vibe, it's C# all the way. Not a popular opinion on HN, but the LLMs are trained heavily on the language and are very, very good at it, plus with the 1-file-per-class organization, it can stay pretty clean. I mean, v10 LTS was just released, with all kinds of new language features, EFCore is still the best ORM I've ever used, with full support for SQLite, Postgres, MySql, etc. It just makes writing and reviewing code a pleasure. And the LLMs don't f*ck it up.
/s... sort of
I observed this in the attacks on Rust due to the huge presence of LGBT people.
Now, while I'm pretty much straight myself, I don't reject LGBT people and don't want to partake in identity politics.
I just want things that work no matter what background you have, yet there are some people attacking Rust because of its inclusive nature.
And just as Linux is perceived as nerdy and geeky and "gaming socks ready", this tokenization of things, and the attaching of political meanings to them, is quickly coming to everything - so perhaps I'm being too general here as well.
Let's say it is not political, but it is definitely acquiring meanings beyond its technical origin and nature.
For example low level converging to Rust, web frontends to something like React etc.
I can maintain the Python code myself and I can execute it everywhere.
If I let my LLM write in Rust then when things break I am out of luck. Also Rust needs to be compiled which means I can't just share the code as freely.
Rust in most cases, especially for back end.
Python when it's low risk (say monitoring dashboard or similar API heavy) or plays to python strengths (e.g. ML/AI - everything ML seems to be python).
Our simulation core components are pure Fortran, no libraries, all written by Claude/Cursor/Codex.
One of the big strengths of Python is legibility: most developers find it easy to read and understand.
If you are planning to have humans verify the code you're using in production, to confirm it implements your intent, the readability of the code you are producing is important.
Performance is valuable, but for a lot of code, performance is less important than correctness and ease of verifying it.
If you are imagining your codebase being one where nobody but Claude reads the code, you might as well do Rust for the better performance. But I don't think a lot of organizations are doing that.
When I use AI to help with coding, there is almost always a point where it gets stuck and I have to solve the problem myself. If I were using Rust at that point, it would be much more painful.
I know Rust has a very strong reputation in the community, but to be honest, I find it a difficult and frustrating language to work with. I would use it when I truly need systems-level performance, but for most high-level work I would rather use Python, because I can move much faster. In most projects, that level of raw performance is not actually necessary.
Asking Clodex to build me a hello world web backend in Rust, Go, Python: Python is read with great ease. Go is fine too, a bit verbose but still ok. Rust hurts my eyes.
I'd settle with Go for this use case.
I don't know why you would use Python at all except for small iterative projects. If you hate java for some reason, there's Go...
I would argue I spent more time fighting the TypeScript build system than Rust’s.
But up until recently I only used either just often enough to never remember what magic configuration needed to go in my tsconfig.json and package.json to get TypeScript to work.
My current goto for desktop apps is Tauri, which gives us a Rust backend and a TS frontend (usually React). Local ML features can be easily loaded as a Python sidecar. Production bundling can be a little challenging, but it seems to work well so far.
Sidenote: Golang is also an amazing language for LLM use, I generally do most of my "infra" stuff in Golang over Rust, but either work fine most of the time.
Usually those kinds of utility scripts are one-shotted without any further input from me, and once they're there and doing what I need I usually don't bother converting them to whatever I would have written them in otherwise (bash would be my usual preference for really small scripts, typescript or rust for bigger utilities, I hate writing python but reading it is fine... kind of).
There you will find your answer.
I don't know rust at all and I've built three applications using it with Claude because it has speed and correctness built-in.
I use Typescript for 90% of the things I build. For web development I've used a number of tools, but mostly react, nextjs, or raw html/css/js. But if I were building an enterprise application I'd consider my team and whether opinionated (Angular) was optimal over flexible (React).
Each project should consider its own optimal tech stack.
There are many existing, often mature, third-party software libraries or solutions that a new project could use, but which hide the internals, including how the data is organized behind the scenes*. Vibe-coding for the specific project requirements, instead of using the pre-existing third-party libraries, is now becoming a feasible option. The vibe-coded version may be simpler (no features beyond the actual need), more flexible (easier to add new needed features), and the data/model behind it could be more accessible.
Looking for feedback on pros/cons and experiences along this.
* I care about the data, as it can be longer-lived than the code itself.
Thanks.
Part of me worries that all this push to LLMs will marginalize niche programming languages in startups, since the lack of training data means falling back to hand-coding - a skill I have a feeling will become increasingly niche over time. I feel capitalism will basically reduce programming languages to a build artifact over time.
I am from a Python background (11 years or so), PHP before that and C/C++ in college days. Rust works very well with coding agents. The amount of code in training data may be less but I would rather have the agent fight the compiler. Given that OpenAI and Anthropic seem interested in Rust, chances are that there is a ton of synthetic code generated with Rust.
1. Type safety as basic guard rails that LLM output is syntactically and schematically correct
2. Concise since you have to review a lot more code
3. Easy to debug / good observability since you can't rely on your understanding of the code. Something functional where you can observe the state at any moment would be ideal.
4. A very large set of public code examples across various domains so there's enough training data for the LLM to be proficient in that language
5. A large open source ecosystem of libraries to write less code and avoid the tendency for generated code to bloat
It's basically all the same things you look for in general. I think TypeScript scores high here but I'm curious if anyone knows of a language that fits these criteria better.
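Criterion 1 can be made concrete even in gradually typed Python (a hedged sketch with made-up names): a frozen dataclass turns an LLM's field-name typo into an immediate error at construction time, and a static checker like mypy would flag the same call before anything runs at all.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    age: int

def try_build() -> str:
    # A typo an LLM might plausibly emit ("agee" for "age") -- rejected
    # when constructed instead of surfacing deep in production.
    try:
        User(name="Ada", agee=36)  # type: ignore[call-arg]
        return "accepted"
    except TypeError:
        return "rejected"

print(try_build())  # rejected
```

The stricter the schema, the more of the LLM's mistakes get converted into this kind of cheap, early, machine-readable failure.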
I think the rule of thumb is to use the tool that is right for the job and that you are going to be able to understand the output.
If anything this is a reason to keep using Python.
Honestly, I'm in the exact same boat, wondering why I don't write in C if Claude is writing it. However, I chickened out, figuring that if support for ML-model or LLM-based flows doesn't exist in C, it would be time-consuming to go back to Python.
Clojure comes to mind at least.
But when I wanted to optimize and edit and reorganize the code it was difficult, so I did a rewrite in C, and it was lighter and faster and simpler and less of a headache.
C for humans, rust for AI.
Personal: Rust/Go based on criticality of being able to glean code quickly, or memory usage, etc
> Agents broke that loop in a specific way: the unit of contribution shifted from the patch to the port.
What does this even mean? Every time there's a bug we port the whole code to a different language instead of patching it? This sounds like absolute nonsense, and makes me wonder whether a human actually wrote this.
This "fair weather development" approach feels very risky if that application is going to be exposed to any serious usage. There WILL be a situation when things break and the AI will be powerless to fix it (quickly) without breaking something else in a vicious loop. There WILL be a situation where things work fine and tests pass with 3 concurrent users but grind to a complete halt with 1000 because there is something O(N^2) deep in the code. And you NEED a human to save your day (which requires also proper architecture for that to be possible in the first place). If you don't plan for this, and just hope for the best, then you are building nothing more than a toy. And if you plan for this, then it matters again what the language is, and whether your team is proficient in it.
Or maybe I'm too old-fashioned, or too far behind the state of the AI art...
You do want to use Rust with LLMs.
The reason you want it is simple, it's more constrained.
LLMs thrive on constraint and drown in freedom.
The further you can constrain the solution space, the more likely you are to end up with a solution you like / that is actually good.
Rust has several properties that make it really good for LLMs:
* Really robust type system that is also very expressive, if guided LLMs can implement most of the invariants in types which substantially increases the chances of success.
* Great compile time errors, the specificity and brevity (vs say C++ template expansion) means token efficient correction of syntax and/or borrow mistakes etc.
* Protection against subtle errors at compile time, namely data races and memory safety issues.
* Great corpus of well-designed code and patterns, higher quality on average than some other ecosystems more favored by beginners/mass-market programming.
* Stdlib is strong, small-ish number of blessed crates.
* Context friendly, type signatures, errors, etc are all dense information.
* Also bias towards compile time checks means less runtime tests which means less toolcall time (and less tests needed overall) which in turn makes the process a ton faster.
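For comparison, the first bullet's "invariants in types" pattern can be sketched even in Python's optional type system (names here are hypothetical): a `NewType` whose only producer is a validating parser means every downstream function can rely on the invariant already holding, and mypy flags raw strings passed where the validated type is required. It's nowhere near Rust's guarantees, but it shows the shape of the idea:

```python
from typing import NewType

# "Parse, don't validate": UserId is only produced by parse_user_id,
# so any function accepting UserId knows the invariant already holds.
UserId = NewType("UserId", str)

def parse_user_id(raw: str) -> UserId:
    if not (raw.isdigit() and len(raw) == 8):
        raise ValueError(f"malformed user id: {raw!r}")
    return UserId(raw)

def load_profile(uid: UserId) -> str:
    return f"profile for {uid}"

print(load_profile(parse_user_id("00012345")))
# load_profile("not-an-id") would be flagged by mypy, though not at runtime.
```

In Rust the compiler enforces this everywhere; in Python it only holds as far as the type checker runs, which is exactly the gap the parent comment is pointing at.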
I have been using Rust, Python and Kotlin continually since ~Jan this year and keeping track of my thoughts, and I increasingly bias towards Rust now where I would previously have chosen Python or Kotlin - just because I am lazy, and I prefer the tool the computer writes better, so I have to write less lol.
However, I expect that in the future some new language will take this role of dual use.
And prompt does not replace that.
Give it 2 years, the ‘Blame the AI ‘ incidents will increase. Like an unfaithful partner you’ll always return to it
tl;dr: about 2 percentage points lost on average with Rust compared to Python; the gap varies by model. Go has a better upper bound, but Opus had it 3% below Python.
The benchmark is a bit old, but the research on why is there; the article is just vibes.
2) it's practically verbose, not technically
3) it resembles pseudocode
4) batteries included shortcuts a lot of work
all of these reasons are a boon for LLM work.
GPT 5.5 writes good haskell.
The (well-known) Sapir–Whorf hypothesis (if you don't know it, look it up) is often invoked for natural languages, but there's a pretty direct analogue for programming languages: the language you "think in" while solving a problem biases which abstractions and idioms you reach for first.
If you force an LLM to first solve a problem in a highly abstract language (Lisp, APL, Prolog) and only later translate that solution to C++ or Rust, you're effectively changing the intermediate representation the model works in. That IR has very different "affordances", e.g.
- Lisp pushes you toward recursive tree/list processing, higher‑order functions and macro‑like decomposition. (some nice web frameworks were initially written in LISP, scheme, etc...)
- APL pushes you toward whole‑array transforms, point‑free pipelines and exploiting data parallelism. (banks are still using it because of performance)
- Prolog pushes you toward facts/rules, constraint satisfaction, and backtracking search. (it is a very high abstraction but might suit LLMs very well)
OK, and when you then translate that program into C++/Rust/python, a lot of this bias leaks through. You often end up with:
Rule engines, constraint solvers, or table‑driven dispatch code when the starting point was Prolog.
Iterator/functor pipelines and EDSL‑like combinators when the starting point was Lisp.
Data‑parallel kernels and "vectorized" loops when the starting point was APL.
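The leak-through is easy to see on a toy task - summing the squares of the even numbers - written three ways in plain Python, each echoing one of the source paradigms above (purely illustrative):

```python
# Lisp-flavored: higher-order functions and composition.
def lisp_style(xs):
    return sum(map(lambda x: x * x, filter(lambda x: x % 2 == 0, xs)))

# APL-flavored: a whole-array transform (a comprehension standing in
# for a vectorized kernel).
def apl_style(xs):
    return sum(x * x for x in xs if x % 2 == 0)

# Prolog-flavored: a fact/rule table driving the computation.
RULES = [(lambda x: x % 2 == 0, lambda x: x * x)]

def prolog_style(xs):
    return sum(act(x) for x in xs for cond, act in RULES if cond(x))

data = [1, 2, 3, 4]
print(lisp_style(data), apl_style(data), prolog_style(data))  # 20 20 20
```

Same result, very different structure - and that structural bias is what survives translation into the target language.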
In principle, an LLM could generate those idioms directly in C++/Rust. In practice, however, models are heavily shaped by their training distribution and default prompts. If you just say "write in Rust", they tend to regress towards the most common patterns in the corpus (framework‑heavy, imperative, not very aggressively functional or data‑parallel), even when the language would support richer abstractions.
By inserting a "thinking" step in a different paradigm, you bias the search over solution space before you ever get to Rust/C++. That doesn’t magically make the code better, but it does change which regions of the design space the model explores.
Same would also be true for python which is already a multi-idiomatic language. So it might be a good idea to learn a portfolio of different languages and then try to tackle a problem with a specific language instead of automatically using python/go/rust because of performance.
Something to consider...
p.s. How would a problem be solved if the LLM had to write it first in Erlang? Is it then automatically distributed?
p.p.s. The "design patterns" of the GoF come automatically to my mind, which might be a good hint to give the LLM.
2) The corpus for the sort of applications I build is likely larger for Python than it is for C++ and Rust. Bigger corpus == more training data == better generated code.
3) The bottleneck in the applications I run aren't in the execution of the code; they're in the database/network latency.
4) I don't get anything extra for pushing Rust or C++ over Python.
I started using Rust in 2018 and I've never used a build system that fought me less, ever, before or after.
I stopped reading after that sentence.
If you've managed software teams before, this won't be new. You just need to make sure the team does the right things. But you don't want to inject yourself on the critical path of everything. That's micro managing. People hate it and it's counter productive. You need to instead delegate responsibility and check that there is a good process with checks and balances that ensures things are done right.
If you are vibe coding, one-shotting, etc., you are essentially operating without guard rails. You won't catch the mistakes that are being made. You aren't doing the due diligence of verifying that what was delivered is the same as what was asked for.
But if you do use guard rails, most of the engineering effort (i.e. your time) goes into building mechanisms to prove that what is being delivered is fit for purpose. And that needs to lean heavily on tools that verify things. Compilers, linters, test suites, headless browser based scenario tests, elaborate benchmarks, etc. Anything you can throw at this. The more the better. Even code quality issues are something you can catch and fix with tools. Code duplication issues are detectable. Poor cohesiveness and high coupling are simple metrics that you can optimize for.
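For instance, the duplication check can itself be a tiny script rather than a manual review - e.g. grouping functions by a dump of their body AST and flagging collisions (a rough sketch; dedicated tools such as pylint's duplicate-code checker go much further):

```python
import ast
from collections import defaultdict

def find_duplicate_functions(source: str) -> list[tuple[str, ...]]:
    """Group functions whose bodies have identical ASTs (names ignored)."""
    groups = defaultdict(list)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            # Key on the body only, so functions that differ just in
            # name still collide.
            key = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            groups[key].append(node.name)
    return [tuple(names) for names in groups.values() if len(names) > 1]

code = """
def a(x): return x + 1
def b(x): return x + 1
def c(x): return x - 1
"""
print(find_duplicate_functions(code))  # [('a', 'b')]
```

Run in CI, a check like this turns "the agent quietly copy-pasted a helper again" into a red build instead of a review finding.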
With AI in the mix, all of that gets run automatically and you create a feedback loop where any introduced problem is more likely to be caught early. If you are a good senior engineer, you would have been doing all of this anyway. Because it compensates for your own inability to not make mistakes. With AI, you just need to do more of it.
I've dabbled with a few generated code bases in Go in the last few months. I have about 3 decades of experience with other languages. But not a lot of experience with Go. So, why did I pick it? It's not because I particularly like the language. It all looks a bit verbose and tedious to me and I've always preferred other languages. But since I'm not writing any code, I can step over that and make use of the fact that the compiler and build tools are really good and catch a lot of issues. By using Go, I'm leveraging the tool ecosystem around it. Which is really solid.
Because I don't read/write Go code, I'm forced to treat the system as a black box. Which means I just test the hell out of it in any way I can think of. When I don't know how, I ask the AI to suggest me ways. And it does, and I make it add those as well. My little system has performance benchmarks, end to end tests for everything, scenario tests testing complex scenarios, static code analysis, race detection, etc. And lots of unit tests. If I find any issue, I get paranoid about what else might be broken.
All I do is get systematic about making it falsify the theory that it could all be broken, by failing to produce a broken test scenario. I'm equally paranoid about code quality and technical debt, so I make sure to check for that as well. Not manually, of course: I simply ask the AI tool to do targeted reviews of the code, looking for duplication, adherence to SOLID principles, etc. Any issues found are prioritized and addressed. With most quality issues, simply asking an LLM to look for them is surprisingly effective. Having guardrails just automates these checks and balances and makes them routine.
My inability to review at the line level no longer matters that much. Worse, me reviewing tens/hundreds of thousands of lines of code is probably counter productive. Even in languages I know well, it would take ages. I'd be the slowest part of the whole engineering process.
One of the reasons I dislike Go is because it's easy for most engineers to write really low grade code with it. But AI agents would probably not write the best code in any language anyway, so not much is lost.
Still, not ALL projects benefit from such an approach, and there are times when, yes, Python is the right tool - not just due to readability for humans, but the other qualities that make it really good for small, iterative apps.
My take has never changed. Knowledge is cheaper than ever, but wisdom is as rare as ever. This is a great example of misunderstanding the former for the latter
Therefore the "best" language is going to be whatever makes it easiest for humans to detect bugs, bad design, or that the "wrong thing" has been developed.
It doesn't matter if the 800-line if statement is able to use pattern matching.
There's been a lot of progress on making coding agents able to solve problems when they can easily evaluate in a closed loop; we desperately need something similar for controlling complexity and using relevant abstractions.
(Joke but also not a joke)
AI doesn't really write code for me, but I do use them to brainstorm/ask questions. Though, I do not use Python. I have never been a fan of the language. I still think Python is a perfectly serviceable language, but it would solve no (important) problems I have ever had better than any other language.
I can see why Python is appealing to many people, and I applaud Guido for all the work and oversight over the years, but Python lacks a lot of the things I like in a language.
I'm using coding tools to build a complex media-intensive application. The approach I'm taking is to build a _reference implementation_ in Python, which is in its design specifics, constrained to use patterns which transliterate into the actual deployment targets (iPadOS/MacOS/Web).
Why start with Python?
Because I can read it, reason about it, and run it, trivially, which are Good Things for the reference. I intend to have multiple targets; I'd rather relate them to a source of ground truth I am fluent in.
For what I'm doing, there is also a very rich set of prior art and existing libraries for doing various esoteric things—my spidey sense is that I'm benefiting from that. More examples, more discourse.
I'm out of the prediction business and won't say this is either a good model for every new project, or, one I will need in another N months/years.
But for the moment it sure feels like a sweet spot.
Ask me again, though, after the reference goes gold and I actually take up the transliteration... :)
b) Python code is easier to introspect, and set up test harnesses around. And also extend in agentic frameworks
c) LLMs are really good at translation. I can give it python code and it can translate it into C.
Lol good meme
That's already a glaring mistake. People could say Perl's CPAN is great; well, it did not save Perl from declining over the last 20 years.
> The Python ecosystem is increasingly a Rust ecosystem wearing a Python hat.
Without statistics to prove this, this claim is useless.
Also, depending on Rust isn't that strange if a language is based on ... C. The only way I would disagree with such an argument were if Python were written in Python. But since it is syntactic sugar over C - just like ruby or perl are too - the argument to use Rust here is simply not different to using C. Perhaps Rust is better than C, but it is not fundamentally different. Whether Python were written in Rust or C is not a functional difference here.
As for AI becoming our new Overlord: I honestly do not want to depend on US mega-corporations. I am not disputing the fact that AI has objective use cases. I am objecting this herd mentality of everyone putting an AI chip into their brain now.
Damn AI slop zombies everywhere - it's like in the old B movie "They Live". But with less entertainment value than that. If they chew bubblegum then it is to slop up everything, not to kick ass.
> Klabnik vibe-coded a new language in Rust, therefore Claude + Rust = Good.
I argue the inverse -- Rust, being an ML-family language, is well suited for parsing, and language design (I know! Shocker!). In more moderate translation -- ML-style languages are good for parsing, interpreting and compiling code. Claude is not the magic here -- ML is.
I would also add that I've had decent success vibe-coding+human-coding Haskell (contrary to the article). My experience is that if I can hand-write a rich set of types (blessed be IxMonad), I can throw Claude to fill in the blanks for the implementations. If I can design the data structures that make the program tick, bridging them is something Claude is awesome at. Again, no surprise -- it's intern-level work.
The key distinction between C, Zig and Rust is that Rust is designed around types. C and Zig are more memory-oriented -- they really see most of your program as flat memory, and you can kind of shoehorn a little bit of data layout into that flat memory. While this offers a large amount of flexibility, this philosophy isn't well suited for proving out correctness. But again -- this doesn't mean they don't have a spot.
When I was a junior at Tesla, I used to joke that senior staff had VMs in their heads, because that's really how you analyze C programs -- you try to execute them in your head, with interesting inputs, but that's about it. Claude's head-VM is quite fuzzy and often makes errors.
With Rust, if you design your type system, you prevent yourself from making dumb mistakes. Swap out "yourself" with Claude here and it's the same story.
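A minimal sketch of the kind of type-level guardrail being described (the names here are illustrative, not from any real codebase): wrap raw integers in newtypes, and the compiler rejects the swapped-argument mistake that a fuzzy head-VM -- human or Claude -- might miss.

```rust
// Newtype wrappers: both are "just" a u64 in memory, but the type
// system refuses to let one stand in for the other.
#[derive(Debug, Clone, Copy, PartialEq)]
struct UserId(u64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct OrderId(u64);

fn lookup_order(user: UserId, order: OrderId) -> String {
    format!("user {} / order {}", user.0, order.0)
}

fn main() {
    let user = UserId(7);
    let order = OrderId(42);

    // The correct call compiles and runs.
    println!("{}", lookup_order(user, order));

    // Swapping the arguments is a *compile-time* error, not a
    // runtime surprise:
    // lookup_order(order, user); // error[E0308]: mismatched types
}
```

The mistake is caught before anything runs, which is exactly the information a compile step hands back to an agent.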
I've yet to see Claude design really nice type systems, fwiw.
But the point is -- Claude is the enemy of beauty and correctness -- and it's up to the SWE to design a type system that prevents it from eroding both. To be clear, I obsess over type systems personally, but that's not the only way -- incredibly rich, comprehensive, huge type systems, fuzzing, Antithesis, and proptesting are all valid ways to minimize the impact of slop.
---
> Code is not written by humans therefore it doesn't matter that you don't know Rust.
Wouldn't say this was explicitly stated, but I definitely smelt this undertone throughout the article. If you don't understand the language you're reading, how can you tell whether the code in front of you is correct or not? If you have a systems engineer sitting across from you to clean your PRs up, you can pass that responsibility onto them, but what about when they give their two weeks' notice?
If all you know is Python, chances are you're going to make better software in Python than in Rust. Stick an `Arc<Mutex<T>>` everywhere and chances are your code will be slower, as a matter of fact. If you want to learn Rust, please join us! But if all you're trying to do is vibe-code better code -- do it in the language you know and can actually debug when shit hits the fan.
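To make the `Arc<Mutex<T>>` point concrete, a hedged sketch (function names are mine): the locked version below is perfectly safe and compiles fine, but every access pays for atomic reference counting and lock acquisition, even though nothing here is concurrent.

```rust
use std::sync::{Arc, Mutex};

// The "stick it in Arc<Mutex<T>>" reflex: safe, but every read
// takes a lock even though no other thread ever touches the data.
fn sum_locked(values: &Arc<Mutex<Vec<i64>>>) -> i64 {
    values.lock().unwrap().iter().sum()
}

// The plain version: same result, no atomics, no lock.
fn sum_plain(values: &[i64]) -> i64 {
    values.iter().sum()
}

fn main() {
    let data = vec![1, 2, 3, 4];
    let locked = Arc::new(Mutex::new(data.clone()));
    assert_eq!(sum_locked(&locked), sum_plain(&data));
    println!("sum: {}", sum_plain(&data)); // sum: 10
}
```

Both produce the same answer; the difference is pure overhead that a Python-shaped mental model won't notice but a profiler will.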
---
> Anthropic C Compiler
Claude is certainly impressive at taking existing code and rewriting it, but I'd like to repeat the exact same rhetoric that many have given -- rewriting =/= original authorship. Awesome, we have a C compiler, but we already had one, and we just rewrote it? Seems like a little bit of wasted electricity.
To build on top of this, I am really happy that Bun is exploring Rust, and the Claude rewrite is truly impressive, but quite surprising at times, preserving strange anti-patterns (my name being said anti-pattern, teehee): https://github.com/oven-sh/bun/blob/ffa6ce211a0267161ae48b82.... It's hard to determine why Claude decided this -- I assume a really strict input prompt.
Do note that the current stage of that PR is much better than what it was at the state of that commit, and obviously Jarred isn't merging blind slop, but that is still human-driven by someone who has an understanding of their product.
My bet is actually that _rewrites_ of already-functioning, well-tested code, are likely to be more common as time progresses. I think that's what Claude is really awesome at, and I think Claude can often achieve 80-20 improvements through rewrites. Again, Claude alone will not be a silver bullet -- it won't generate data-oriented programs if the source material wasn't data-oriented. It won't optimize for cache coherency, if the source didn't, but moving from Python to Rust alone, with more-or-less the same code structure, you're likely to see improvements by virtue of common operations being memory-coherent and avoiding the GIL and so on.
---
> A C compiler written in Rust used to be a graduate thesis. It isn’t anymore.
Come on, this is disingenuous -- a simple C compiler is a one-day project. LLVM is a graduate thesis (and for good reason). Copy-pasting prior art is academic dishonesty, and Claude does a lot of that.
---
For transparency: I work with Noah.
EDIT: Wanted to add that not a single line of my comment was AI generated.
When I run a game I don't care if the dev used C or whatever. Only programmers care about the syntactic representation.
I need the machine code/byte code patterns/geometric/color gradient data.
Eventually Python will be what you see on screen, but no CPython interpreter as we know it will be running.
The model will have an internal awareness of the result to return without running an actual REPL.
https://dev.to/zijianhuang/prompt-to-ai-generated-binary-is-...
FUCKING
ACCOMPLISHED
https://platform.claude.com/docs/en/agents-and-tools/tool-us... Also Claude Cowork, etc.
1. You don't need compilation, so you run and test faster. Compilers were primarily built to prevent human error, and only very secondarily to guard your business logic.
2. Your validators quite often need to evolve. With Python or JS, this is a pydantic edit + run. Imagine 3–4 iterations of the same in Rust?
3. Composition. The entire cycle of software changes. An agentic system takes orders from a human, reads some kind of cache and snippets, writes/combines snippets, tests it, runs it, and fixes it. This almost pushes you toward snippets the size of a function, which still need to be covered with tests. I can easily build 10 function-sized Python files and write an agent that will mix and match 3 of them into a final result. With a compiled language, you'd need to compile 10 times — or store the binaries and think about what platform they'll execute on, etc.
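For contrast with point 2, here is what one iteration of an evolving validator looks like in plain Rust (the form and rules are illustrative, no external crates assumed): every tweak to a rule below means another compile cycle before the agent can re-run its tests, which is exactly the friction being described.

```rust
// A hand-rolled validator: the Rust analogue of "a pydantic edit
// + run". Changing any rule means recompiling before re-running.
#[derive(Debug)]
struct SignupForm {
    email: String,
    age: u8,
}

fn validate(form: &SignupForm) -> Result<(), String> {
    if !form.email.contains('@') {
        return Err("email must contain '@'".to_string());
    }
    if form.age < 13 {
        return Err("age must be at least 13".to_string());
    }
    Ok(())
}

fn main() {
    let ok = SignupForm { email: "a@b.com".to_string(), age: 30 };
    let bad = SignupForm { email: "nope".to_string(), age: 30 };
    assert!(validate(&ok).is_ok());
    assert!(validate(&bad).is_err());
    println!("validators ran");
}
```

Whether that compile step is wasted time or a free correctness check is, of course, the whole disagreement in this thread.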
I love the fact that the author is questioning this. No doubt the market for your favorite language will change. 80% of languages will go away — there is no market anymore for such a big variety of languages.