(Works on older browsers and doesn't require JavaScript except to get past CloudSnare).
Verification has a high cost and trust is the main way to lower that cost. I don't see how one can build trust in LLMs. While they are extremely articulate in both code and natural language, they will also happily go down fractal rabbit holes and show behavior I would consider malicious in a person.
As we were debugging, my colleague revealed his assumption that I'd used AI to write it, and expressed frustration at trying to understand something AI generated after the fact.
But I hadn't used AI for this. Sure, yes, I do use AI to write code. But this code I'd written by hand, with careful, deliberate thought to the overall design. The bugs didn't stem from some fundamental flaw in the refactor; they were little oversights in adjusting existing code to a modified API.
This actually ended up being a trust-building experience overall, because my colleague and I got to talk about the tension explicitly. It ended up being a pretty gentle encounter with the power of what's happening right now. In hindsight I'm glad it worked out this way; I could imagine that in a different work environment, something like this could have gotten a lot messier.
Be careful out there.
If someone uses an LLM and produces bug-free code, I'll trust them. If someone uses an LLM and produces buggy code, I won't trust them. How is this different from when they were only using their brain to produce the code?
I think what the author misses here is that imperfect, probabilistic agents can build reliable, deterministic systems. No one would trust a garbage collection tool based on how reliable its author was, but rather on whether it proves, through extensive testing, that it does what it's intended to do.
I can certainly see an erosion of trust in the future, with the result being that test-driven development gains even more momentum. Don't trust, and verify.
It seems that LLMs, as they work today, make developers more productive. It is possible that they benefit less experienced developers even more than experienced developers.
More productivity, and perhaps very large multiples of productivity, will not be abandoned because of roadblocks constructed by those who oppose the technology for whatever reason.
Examples of the new productivity tool causing enormous harm (e.g., a bug that brings down some large service for a considerable amount of time) will not stop the technology if it brings considerable productivity.
Working with the technology and mitigating its weaknesses is the only rational path forward. And those mitigations can't be a set of rules that completely strip the new technology of its productivity gains. The mitigations have to work with the technology to increase its adoption, or they will be worked around.
> require them to be majority hand written.
We should specify the outcome, not the process. Expecting the contributor to understand the patch is a good idea.
> Juniors may be encouraged/required to elide LLM-assisted tooling for a period of time during their onboarding.
This is a terrible idea. Onboarding is largely a series of random environment-setup hitches that LLMs are often really good at resolving. It's also getting up to speed on code and docs, and I've got some great text-search/summarizing tools to share.
I’ve never heard of this cliff before. Has anyone else experienced this?
That said, I do think it would be nice for people to note in pull requests which files in the diff contain AI-generated code. It's still a good idea to look at LLM-generated code and human-written code through slightly different lenses, since the mistakes each makes are often different in flavor, and it would save me time in a review to know which is which. Has anyone seen this at a larger org, and is it of value to you as a reviewer? Maybe some tool sets can already do this automatically (I suppose all the companies that report the % of their code that is LLM-generated must have one, if they actually have such granular metrics?)
Sorry about the JS stuff; I wrote this while also fooling around with alpine.js for fun. I never expected it to make it to HN. I'll get a static version up and running.
Happy to answer any questions or hear other thoughts.
Edit: https://static.jaysthoughts.com/
Static version here with slightly wonky formatting, sorry for the hassle.
Edit2: Should work well on mobile now; added a quick breakpoint.
At the moment LLMs allow me to punch far above my weight class in Python, where I'm doing a short-term job. But then I know all the concepts from decades of dabbling in other ecosystems. Let's all admit there is a huge amount of accidental complexity (h/t Brooks's "No Silver Bullet") in our world. For better or worse, there are skill silos that are now breaking down.
Sure, we can ask it why it did something, but any reason it gives is just something generated to sound plausible.
I once had a member of my extended family who turned out to be a con artist. After she was caught, I cut off contact, saying I didn’t know her. She said, “I am the same person you’ve known for ten years.” And I replied, “I suppose so. And now I realize I have never known who that is, and that I never can know.”
We all assume the people in our lives are not actively trying to hurt us. When that trust breaks, it breaks hard.
No one who uses AI can claim “this is my work.” I don’t know that it is your work.
No one who uses AI can claim that it is good work, unless they thoroughly understand it, which they probably don’t.
A great many students of mine have claimed to have read and understood articles I have written, yet I discovered they hadn’t. What if I were AI and they received my work and put their name on it as author? They’d be unable to explain, defend, or follow up on anything.
This kind of problem is not new to AI. But it has become ten times worse.
Wondering what they would be producing with LLMs?
There's a lot of posts about how to do it well, and I like the idea of it, generally. I think GenAI has genuine applications in software development beyond as a Google/SO replacement.
But then there's real world code. I constantly see:
1. Over-engineering. People used to keep it simple because they were limited by how fast they could type. Well, those gloves sure did come off for a lot of developers.
2. Lack of understanding / memory. If I ask someone how their code works, and they didn't write it (or at least carefully analyse it), it's rare for them to understand or even remember what they did there. The common answer to "how does this work?" went from "I think like this, but let me double check" to "no idea". Some will be proud to tell you they auto-generated documentation, too. If you have any questions about that, chances are you'll get another "no idea" response. If you ask an LLM how it works, that's very hit and miss for non-trivial systems. I always tell my devs I hire them to understand systems first and foremost; building systems comes second. I feel increasingly alone with that attitude.
3. Bugs. So many bugs. It seems devs who generate code would need to do a lot more explicit testing than those who don't. There's probably just a missing feedback loop: when typing code in by hand, you have to test every little button action and so on at least once; it's just part of the work. Chances are you don't break it after you last tested it, so manually written code generally gets one-time exhaustive manual testing built into the process naturally. If you generate a whole UI area, you need to do thorough testing of all kinds of conditions. Seems people don't.
So while it could be great, from my perspective, it feels like more of a net negative in practice. It's all fun and games until there's a problem. And there always is.
Maybe I have a bad sample of the industry. We essentially specialise in taking over technically disastrous projects and other kinds of tricky situations. Few people hire us to work on a good system with a strong team behind it.
But still, comparing the questionable code bases I got into two years ago with those I get into now, there is a pretty clear change for the worse.
Maybe I'm pessimistic, but I'm starting to think we'll need another software crisis (and perhaps a wee AI winter) to get our act together with this new technology. I hope I'm wrong.
I have instructions for agents that differ in some details of convention, e.g. human contributors use AAA allocation style, while agents are instructed to use type-first. I convert code that "graduates" from agent product to review-ready as I review agent output, which keeps me honest that I don't submit unscrutinized code to the review of other humans: they are able to prompt an LLM without my involvement, and I'm able to ship LLM slop without making a demand on their time. It's an honor system, but a useful one if everyone acts in good faith.
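For readers who haven't seen the convention split, here's a minimal C++ sketch (hypothetical function names, not from any real codebase) showing how declaration style alone can mark whether a line is human-authored (AAA, "almost always auto") or agent-authored (type-first):

```cpp
#include <string>
#include <vector>

// Human-style: AAA ("almost always auto") declarations.
auto make_label(int id) -> std::string {
    auto prefix = std::string{"item-"};
    auto parts  = std::vector<std::string>{prefix, std::to_string(id)};
    return parts[0] + parts[1];
}

// Agent-style: type-first declarations, so the origin of a line stays
// visible at a glance until the code "graduates" through review.
std::string make_label_from_agent(int id) {
    std::string prefix = "item-";
    std::vector<std::string> parts = {prefix, std::to_string(id)};
    return parts[0] + parts[1];
}
```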
I get use from the agents, but I almost always make changes and reconcile contradictions.
While on the other hand real nation-state threat actors would face no such limitations.
On a more general level, what concerns me isn't whether people use it to get utility out of it (that would be silly), but the power imbalance in the hands of a few, and, with new people pouring their questions into it, this divide getting wider. But it's not just the people using AI directly; it's also every post online that eventually gets used for training. So to be against it would mean to stop producing digital content.
I found out very early that under no circumstances may you have code you don't understand, anywhere. Well, you may, but not in public, and you should commit to understanding it before anyone else sees it. Particularly before the sales guys do.
However, AI can help you with learning too. You can run experiments, test hypotheses and burn your fingers so fast. I like it.
Making these sorts of blanket assessments of AI, as if it were a singular, static phenomenon, is bad thinking. You can say things like "AI code bad!" about a particular model, or a particular model used in a particular context, and make sense. You cannot make generalized statements about LLMs as if they were uniform in their flaws and failure modes.
They're as bad now as they're ever going to be again, and they're getting better faster, at a rate outpacing the expectations and predictions of all the experts.
The best experts in the world, working on these systems, have a nearly universal sentiment of "holy shit" when working on and building better AI - we should probably pay attention to what they're seeing and saying.
There's a huge swathe of performance gains to be made in fixing awful human code. There's a ton of low-hanging fruit to be had by doing repetitive and tedious stuff humans won't or can't do. Those two things mean at least 20 years of impressive utility from AI code can be had.
Things are just going to get faster, and weirder, and weirder faster.