And in practice that means that I won’t take “The AI did it” as an excuse. You have to stand behind the work you did even if you used AI to help.
I neither tell people to use AI nor tell them not to use it, and in practice people have not been using AI much, for whatever that is worth.
At the moment it's "We don't need more contributors who aren't programmers to contribute code," which is from a reply and isn't representative of the original post.
The HN guidelines say: please use the original title, unless it is misleading or linkbait; don't editorialize.
They are responsible for it.
However, here's a different situation:
If the company you're working for requires you to use LLMs to code, I think it's 100% defensible to say "Oh, the AI did that" when there's a problem, because the company required its use. You would have done it better if the company hadn't forced you to cut corners.
I am responsible for ensuring copyright has not been violated with LLM-generated code I publish. However, proving the negative, i.e. that the code does not infringe anyone's copyright, is almost impossible.
I have experienced this: Claude came up with an almost perfect solution to a tricky problem, ten lines to do what I've seen done in multiple KLOC, and I later found an almost identical solution in copyrighted material.
Requiring people who contribute to be "able to answer questions about their work during review" is definitely reasonable.
The current title of "We don't need more contributors who aren't programmers to contribute code" is an entirely different discussion.
We use plenty of models to calculate credit risk, but we never let the model sign the contract. An algorithm can't go to court, and it can't apologize to a bankrupt family.
"Human in the Loop" isn't just about code quality. It's about liability. If production breaks, we need to know exactly which human put their reputation on the line to merge it.
Accountability is still the one thing you can't automate.
With little icons of rocket ships and such.
I especially like the term "extractive contribution." That captures the issue very well and covers even non-AI instances of the problem which were already present before LLMs.
Making reviewer-friendly contributions is a skill on its own and makes a big difference.
One thing I didn't like was the copy/paste response for violations.
It makes sense to have one. It's just that the text they propose uses what I'd call insider terms, as well as terms that sort of put down the contributor.
And while that might be appropriate at the next level of escalation, the first-level stock text should be easier for the outside contributor to understand, and should better explain the next steps for the contributor to take.
1. It shifts the cognitive load from the reviewer to the author, because the author now has to give an elevator pitch. This can work a bit like a "rubber duck" session, since the author likely has to think about these questions up front.
2. In my experience this is much faster than a lonesome review with no live input from the author on the many choices they made.
First pass: have a reviewer give a go/no-go with optional comments on design/code quality, etc.
I would never have thought that someone could actually write this.
Reading through the (first few) comments and seeing people defending the use of pure AI tools is really disheartening. I mean, they're not asking for much: just that one reviews and understands what the AI produced for them.
I also recently wrote a similar policy[0] for my fork of a codebase. I had to write it because the original developer took the AI pill, started committing totally broken code that was full of bugs, and doubled down when asked about it [1].
On an analysis level, I recently commented[2] that "Non-coders using AI to program are effectively non-technical people, equipped with the over-confidence of technical people. Proper training would turn those people into coders that are technical people. Traditional training techniques and material cannot work, as they are targeted and created with technical people in mind."
But what's more, we're also seeing programmers use AI to create slop. They're effectively technical people equipped with their initial over-confidence, highly inflated by a sense of effortless capability. Before AI, developers were (sometimes) forced to pause, investigate, and understand; now it's easier and more natural to simply assume they grasp far more than they actually do, because @grok told them this is true.
[0]: https://gixy.io/contributing/#ai-llm-tooling-usage-policy
[1]: https://joshua.hu/gixy-ng-new-version-gixy-updated-checks#qu...
[2]: https://joshua.hu/ai-slop-story-nginx-leaking-dns-chatgpt#fi...
"To critique the fortress is not enough. We must offer a blueprint for a better structure: a harbor. A harbor, unlike a fortress, does not have a simple binary function of letting things in or keeping them out. It is an active, intelligent system with channels, docks, workshops, and expert pilots, all designed to guide valuable cargo safely to shore, no matter the state of the vessel that carries it. This is the model we must adopt for open source in a post-AI world. The anxiety over “extractive” contributions is real, but the solution is not a higher wall; it is a smarter intake process."
There needs to be a label designating open source projects that are so important and so widely adopted throughout the industry that not just anyone can throw patches at them without understanding what the patch does and why it's needed.
It was the worst thing to happen to programming and computer science that I have seen: good for prototypes, but not for production software, and especially not for important projects like LLVM.
It is good to gatekeep this slop from LLVM before it gets out of control.
This seems like a curious choice. At my company we have both Gemini and Cursor review agents available (I'm not sure which model is under the hood on the latter). Both frequently raise legitimate points. I'm sure they're abusable; I just haven't seen it.