Coding agents greatly reduce the barrier to contributing something that at least looks okay on the surface, so reviewing contributions will quickly become even more of a bottleneck. Manual contribution used to filter away most low-effort attempts, or at least they could easily be identified and rejected.
That dynamic is now different, and maintainers risk being swarmed with low-effort contributions that take a lot of time to review and respond to. Some AI contributions might be reviewed, revised, and end up of acceptable quality, but how can maintainers know which ones without reviewing everything, good and bad alike?
I think we will see multiple attempts like this to shift things back to the old dynamic by rejecting anything that can be identified as AI-generated at a glance. I suspect that will get harder over time, so my prediction is that we will soon see more open source repos stop accepting outside contributions entirely.
Even if LLMs one day become good enough to quickly produce code on par with humans (which I strongly doubt), why would contributors have any incentive to have someone else do that (the easy part) rather than just doing it themselves?
Fun while it lasted, huh?
So, autocomplete done by deterministic algorithms in IDEs is okay, but autocomplete done by LLM algorithms - no, that's banned? Ok, surely everybody agrees with that, it's policy after all.
How is it possible to distinguish between the two in the vast majority of cases, where the hand-written code and the autocompleted code are byte-for-byte identical?
Are we supposed to record video of ourselves coding to show that we typed the letters one by one?
> 2. Recommending generative AI tools to other community members for solving problems in the postmarketOS space.
Is searching for pieces of code considered part of solving problems?
Then how do we distinguish between finding a required function by grepping code or by asking an LLM to search for it?
Can we ask an LLM questions about postmarketOS? Like, "what is the proper way to query the kernel for X given Z"?
If a community member asks this question and I already know the answer via an LLM, am I now banned from giving the correct answer?
--
Don't get me wrong. I am sick and tired of the vomit-inducing AI bullshit (as opposed to the tremendous help that LLMs provide to experienced devs).
I fail to see how a policy like this is even enforceable let alone productive and sane.
On the other hand, I absolutely see where this policy is coming from. Projects are having a hard time navigating the issue and are looking for ways to stem the insurmountable amount of incoming slop.
I think we still haven't found the right way to do it.
AI use should be able to accelerate the development of ports for currently unsupported or under-supported devices, which would directly support the project.
I guess I wouldn't worry about the policy; they will probably naturally change it if/when AI becomes more useful in practice.
that ship has sailed with codex 5.3 in 90% of SWE jobs, unfortunately. I expect the next 9% won't survive the following 12 months, and the last 1% is done within 5 years.
it isn't even about principles - projects not using gen AI will become basically irrelevant, because the pace of competitors enabled by gen AI will be too great.