We went from data mining to data fracking.
[0]: https://blog.pragmaticengineer.com/stack-overflow-is-almost-...
[1]: https://www.niemanlab.org/2026/01/news-publishers-limit-inte...
[2]: https://www.theregister.com/2024/05/16/wiley_journals_ai/
[3]: https://www.heise.de/en/news/OpenStreetMap-is-concerned-thou...
1. I write hobby code all the time. I've basically stopped writing it by hand and now use an LLM for most of these tasks. I don't think anyone is opposed to that. I had zero users before and I still have zero users. And that is ok.
2. There are actual free and open source projects that I use. Sometimes I find a paper cut or something that I think could be done better. I usually have no clue where to begin. I am not sure if it even is a defect most of the time. Could it be intentional? I don't know. The best I can do is reach out and ask. This is where the friction begins. Nobody bangs out perfect code on the first attempt, but usually maintainers are kind to newcomers because, who knows, maybe one of those newcomers could become one of the maintainers one day. "Not everyone can become a great artist, but a great artist can come from anywhere."
LLMs changed that. The newcomers are more like Linguini than Remy. What's the point in mentoring someone who doesn't read what you write and merely feeds it into a text box for a next-token predictor to do the work? To continue the analogy from the Disney Pixar movie Ratatouille, we need enthusiastic contributors like Remy, who want to learn how things work and care about the details. Most people are not like that. There is too much going on every day and it is simply not possible to go in depth about everything. We must pick our battles.
I almost forgot what I was trying to say. The bottom line is, if you are doing your own thing like I am, LLMs are great. However, I would ask everyone to have empathy and not spread our diarrhea into other people's kitchens.
If it wasn't an LLM, you wouldn't simply open a pull request without checking first with the maintainers, right?
AI bots are literally DDoS'ing servers. Adoption is consuming both physical and computing resources and making them either inaccessible or expensive for almost everyone.
The most significant one is the human cost. We suddenly found ourselves dealing with overwhelming levels of AI content/code/images/video that is mostly subpar. Maybe as AI matures we'll find it easier and have better tools to deal with the volume, but for now it feels like it is coming from bad actors even when it is done by well-meaning individuals.
There's no doubt AI has its uses and it is here to stay, but I guess we'll all have to struggle until we reach that point where it is a net benefit. The hype from those financially invested isn't helping one bit, though.
I think it is about who is contributing, intention, and various other nuances. I would still say it is net good for the ecosystem.
Here's the good news: AI cannot destroy open source. As long as there's somebody in their bedroom hacking out a project for themselves, that then decides to share it somehow on the internet, it's still alive. It wouldn't be a bad thing for us to standardize open source a bit more, like templates for contributors' guides, automation to help troubleshoot bug reports, and training for new maintainers (to help them understand they have choices and don't need to give up their life to maintain a small project). And it's fine to disable PRs and issues. You don't have to use GitHub, or any service at all.
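To make the "automation to help troubleshoot bug reports" idea concrete, here is a minimal sketch of the kind of thing a maintainer could run, assuming a GitHub-hosted project. The required section headings and the "needs-more-info" label are placeholders I made up; only the GitHub REST endpoints used (list issues, add labels, post a comment) are real.

    # Hypothetical triage helper: label and nudge bug reports that are missing
    # the sections a contributors' template would ask for. Section names, the
    # label, and the repo are placeholders; the GitHub REST endpoints are real.
    import requests

    API = "https://api.github.com"
    REQUIRED_SECTIONS = ["### Steps to reproduce", "### Expected behavior", "### Version"]

    def triage_new_issues(owner: str, repo: str, token: str) -> None:
        headers = {"Accept": "application/vnd.github+json",
                   "Authorization": f"Bearer {token}"}
        issues = requests.get(f"{API}/repos/{owner}/{repo}/issues?state=open",
                              headers=headers, timeout=10).json()
        for issue in issues:
            if "pull_request" in issue:  # this endpoint also returns PRs; skip them
                continue
            body = issue.get("body") or ""
            missing = [s for s in REQUIRED_SECTIONS if s not in body]
            if not missing:
                continue
            number = issue["number"]
            requests.post(f"{API}/repos/{owner}/{repo}/issues/{number}/labels",
                          headers=headers, json={"labels": ["needs-more-info"]},
                          timeout=10)
            requests.post(f"{API}/repos/{owner}/{repo}/issues/{number}/comments",
                          headers=headers,
                          json={"body": "Thanks for the report! Please fill in the "
                                        "missing sections: " + ", ".join(missing)},
                          timeout=10)

Nothing about this requires GitHub specifically; the same shape works on any forge with an API, which matters if you take the "you don't have to use GitHub" advice above.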
> But it's not improving like it did the past few years.
As opposed to... what? The past few months? Has AI progress so broken our minds as to make us stop believing in the concept of time?
AI is killing creativity and human collaboration; those long nights spent having pizza and coffee while debugging that stubborn issue or implementing yet another 3D engine… now it is all extremely boring.
No; FABRICATED quotes. We have a perfectly good, correct word for what's going on.
> But I wouldn't run my production apps—that actually make money or could cause harm if they break—on unreviewed AI code.
I hope no one is actually letting unreviewed code through. AI can, and _will_, make mistakes.
Nowadays > 90% of my code tasks are handled by AI. I still review and guide it to produce what I intended to do myself.
From the article -
> It's gotten so bad, GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we'll see that feature closed off in more and more repos.
I don't have a solution for this, I'm pointing to the flaw in the assumption that AI is destroying open-source.
From my observation, the people that are the most excited about AI are low-skilled/unskilled people in that domain. If said people treated AI as a learning tool, everything would be great (I think AI can be a really effective teacher if you're truly motivated to learn). The problem is those people think they "now have the skill", even though they don't. They essentially become walking examples of the Dunning-Kruger effect (the cognitive bias where people with limited knowledge or competence in a particular domain greatly overestimate their own knowledge or competence).
The problem with being able to produce an artifact that superficially looks like a good product, without the struggle that comes with true learning, is that you miss out on all the supporting knowledge you actually need to judge the quality of the output and fix it, or even the taste to be able to guide the agent toward good patterns vs. poor patterns.
I'd encourage people that are obsessed with cutting edge AI and running 5000 Claude agents simultaneously to vibe code a website to take a step back and use the AI to teach them fundamentals. Because if all you can do is prompt, you're useless.
AI is a tool that must be used well, and many people currently raising pull requests seem to think that they don't even need to read the changes, which puts an unnecessary burden on the maintainers.
The first review must be by the user who prompted the AI, and it must be thorough. Only then would I even consider raising a PR against any open source project.
What I found in the following week is a pattern of:
1) People reaching out with feature requests (useful)
2) People submitting minor patches that take up a few lines of code (useful)
3) People submitting larger PRs that were mostly garbage
#1 above isn't going anywhere. #2 is helpful, especially since these are easy to check over. For #3, MOST of what people submitted wasn't AI slop per se, but just wasn't well thought out, or was of poor quality. Or a feature that I just didn't want in the product. In most cases, I'd rather have a #1 and just implement it myself in the way that I want the code organized, rather than have someone submit a PR with poorly written code. What I found is that when I engaged with people in this group, I'd see them post on LinkedIn or X the next day bragging about how they contributed to a cool new open-source project. For me, the maintainer, it was just annoying, and I wasn't putting this project out there to gain the opportunity to mentor junior devs.
In general, I like the SQLite philosophy of "we are open source, not open contribution." They are very explicit about this, and it's important for anyone putting out an open source project to remember that you have ZERO obligation to accept any code or feature requests. None.
The bias in AI coding discussions heavily skews greenfield. But I want to hear more from maintainers. By their nature they’re more conservative and care about balancing more varied constraints (security, performance, portability, code quality, etc etc) in a very specific vision based on the history of their project. They think of their project more like evolving some foundational thing gradually/safely than always inventing a new thing.
Many of these issues don’t yet matter to new projects. So it’s hard to really compare the greenfield with a 20 year old codebase.
There is a temporary solution. Let maintainers limit PRs to accounts that were created prior to November 30 2022 [1]. These are known-human accounts.
Down the road, one can police for account transfers and create a system where known-human accounts in good standing can vouch for newer accounts. But for now that should staunch the bleeding.
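As a rough illustration of the account-age idea, a maintainer-side check could look something like the sketch below. The cutoff date comes from the suggestion above; the filtering policy and repo details are assumptions, and only the public GitHub REST endpoints (get user, list pull requests) are real.

    # Hypothetical sketch: flag open PRs whose authors created their GitHub
    # account after 2022-11-30. The policy itself is the assumption here;
    # the endpoints used are the standard GitHub REST API.
    from datetime import datetime, timezone
    import requests

    API = "https://api.github.com"
    CUTOFF = datetime(2022, 11, 30, tzinfo=timezone.utc)

    def account_predates_cutoff(username: str, headers: dict) -> bool:
        user = requests.get(f"{API}/users/{username}", headers=headers, timeout=10).json()
        created = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))
        return created < CUTOFF

    def prs_from_new_accounts(owner: str, repo: str, token: str) -> list[int]:
        headers = {"Accept": "application/vnd.github+json",
                   "Authorization": f"Bearer {token}"}
        prs = requests.get(f"{API}/repos/{owner}/{repo}/pulls?state=open",
                           headers=headers, timeout=10).json()
        return [pr["number"] for pr in prs
                if not account_predates_cutoff(pr["user"]["login"], headers)]

Whether flagged PRs get closed automatically or just labeled for a human to look at is a policy choice; the vouching system mentioned above would slot in as an extra allowlist check.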
There is a strong legal basis for this to happen because if you read the MIT license, which is one of the most common and most permissive licenses, it clearly states that the code is made available for any "Person" to use and distribute. An AI agent is not a person so technically it was never given the right to use the code for itself... It was not even given permission to read the copyrighted code, let alone ingest it, modify it and redistribute it. Moreover, it is a requirement of the MIT license that the MIT copyright notice be included in all copies or substantial portions of the software... Which agents are not doing in spite of distributing substantial portions of open source code verbatim, especially when considered in aggregate.
Moreover, the fact that a lot of open source devs have changed their views on open source since AI reinforces the idea that they never consented to their works being consumed, transformed and redistributed by AI in the first place. So the violation applies both in terms of the literal wording of the licenses and also based on intent.
Moreover, the usage of code by AI goes beyond a copyright violation of the code/text itself; it appropriated ideas and concepts without giving due credit to their originators, so there is a deeper ethical component involved: we don't have a system to protect human innovation from AI. Human IP is completely unprotected.
That said, I think most open source devs would support AI innovation, but just not at their expense with zero compensation.
Additionally, Geerling raises good points, but I am not sure we should jump to his conclusion yet.
It never is. You know you’ve hit peak bubble when everyone you know is investing in the new hotness and saying, “This time it’s different.” When that happens, get ready to short the market.
A smaller number of PRs generated by OpenClaw-type bots are also doing so based on their owner's direct or implied instructions. I mean, someone is giving them GitHub credentials and letting them loose.
AI is also allowing the creation of many new open-source projects, led by responsible developers.
Given the exponential speed at which AI is progressing, surely the quality of such PRs is going to improve. But there are also opportunities for the open-source community to improve their response. It will sound controversial, but AI can be used to perform an initial review of PRs, suggest improvements, and, in extreme cases, reject them.
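One possible shape for that initial-review step is sketched below. The model name, prompt, and the use of the OpenAI Python client are illustrative assumptions, not a claim about how any project actually does this, and a human maintainer still makes the final call.

    # Hypothetical first-pass PR review: ask an LLM to comment on a diff
    # against the project's contributing guide. Model name and prompt are
    # made up for illustration; nothing here merges or rejects on its own.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def first_pass_review(diff: str, contributing_guide: str) -> str:
        """Return review notes for a unified diff; a maintainer reads them, not the bot."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; substitute whatever is available
            messages=[
                {"role": "system",
                 "content": "You are a strict but polite reviewer for an open source "
                            "project. Point out bugs, missing tests, and deviations "
                            "from the contributing guide. Do not approve anything."},
                {"role": "user",
                 "content": f"Contributing guide:\n{contributing_guide}\n\nDiff:\n{diff}"},
            ],
        )
        return response.choices[0].message.content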
We are in the early days, and I believe that things will get better as more people calm the f down. People who have built things for ages will continue to do so, with or without coding agents.
In the long term, I think Open Source will win. I can imagine content management systems, eCommerce software, CRMs, etc. all becoming coding-agent friendly - customers could customize the core software with agents and the scaffold would provide fantastic guardrails.
Self-hosting is already becoming way more popular than it ever was. People are downloading all sorts of tools to build software. Building is better. A structure needs to emerge.
AI has laid bare the difference.
Open Source is significantly impacted. Business models based on it are affected. And those who were not taking the political position find that they may not prefer the state of the world.
Free software finds itself, at worst, a bit annoyed (need to figure out the slop problem), and at best, an ally in AI - the amount of free software being built right now for people to use is very high.
Project after project reports wasted time, increased hosting/bandwidth bills, and all-around general annoyance from this UTTER BULLSHIT. But every morning we wake up, and it's still there, no sign of it ever stopping.
I'm the sole maintainer for a gamedev "middleware" open source project for Godot, and all AIs have generally been crap about Godot stuff, frequently getting it wrong, but Codex helped me catch some latent bugs that could have caused hard-to-spot mysterious behavior and a lot of head scratching.
I don't dare let it edit anything but I look at its suggestions and implement them my way. Of course it's still wrong sometimes, if I trusted it blindly I would be f'ed. A few times I had to repeatedly tell it about how some of its findings were incorrect or the intended behavior, until it relented with "You're right. My assumption was based on..."
Also, while I would [probably] never let AI be the source of any of my core code, it's nice for experiments and what-ifs: since my project is basically a library of more-or-less standalone components, it's actually more favorable for AI to wire them together like prebuilt Lego blocks: I can tell it to "make a simple [gameplay genre] scene using existing components only, do not edit any code" and it lets me spot what's missing from the library.
In the end this too is a tool like everything else. I've always wanted to make games but I've always been sidetracked by "black hole" projects like trying to make engines and frameworks without ever actually making an actual full game, and I think it's time to welcome anything that helps me waste less time on the stuff that isn't an actual game :)
1. AI slop PRs (sometimes giant). Author responds to feedback with LLM generated responses. Show little evidence they actually gave any thought of their own towards design decisions or implementation.
2. (1) often leads me to believe they probably haven't tested it properly or thought of edge cases. As reviewer you now have to be extra careful about it (or just reject it).
3. Rise in students looking for a job/internship. The expectation is that untested LLM-generated code will earn them positive points because it looks like they have dug into the codebase. (I've had cases where they said they haven't tested the code, but it should "just work".)
4. People are now even lazier about cleaning up code.
Unfortunately, all of these issues come from humans. LLMs are fantastic tools and as almost everyone would agree they are incredibly useful when used appropriately.
I mean I don’t want you sending PRs to my vibe coded project, but I also don’t care if you fork it to make it useful for your needs
We’ve been so worried about the burden of forking in the past - maybe that should change?
Using AI to find relevant parts of a codebase, or to help you remember stuff like which annotations a data class needs for DB persistence (yes, I'm a Java server dev, hi!), is awesome. Having Claude solo-dev an application based on a prompt generated by GPT is something else entirely (pretty fun, but not very useful for anything more complicated than mega-trivial)
OpenClaw is like the third level to this that also exists for some reason.
Other than by corrupt criminals and mafia types who have a need to covertly hide cash.
And then the current administration wants the government to 'protect' crypto investors against big losses. Gotta love it.
Were NFTs or crypto a bubble? The idea of a bubble means that it "pops" in dramatic fashion. NFT prices in aggregate faded slowly, and the impact only applied to a handful of individuals. Moreover, one can speculate that the purpose of much of the behavior we have seen with crypto and NFTs was largely illicit financial engineering.
If a handful of bad PRs "are destroying open source," then Open Source as a concept is surprisingly vulnerable. No project worth its salt ever integrates unverifiable PRs. No valid OSS project ever integrates uninvited PRs in the first place. Every PR is driven by an issue or a very robust, specific description. A project receiving an "unsolicited" PR does not make the maintainer yell "Oh, I am ruined."
I have stopped checking out these programming content videos for the last year or so. But I stupidly did it here. Every single channel has become like Coffeezilla with an agenda, casting AI as a catalyst of great harm.
There are definitely people abusing AI and lying about what it can actually do. However, Crypto and NFTs are pretty much useless. Many people (including me) have already increased productivity using LLMs.
This technology just isn't going away.
Open source software was trivially better in the nineties because it was done by people who would have and often did do it for free. Those people are better by simp.
The people bitching about it now didn't push back when it unified on a forge, or when it sold to Microsoft, or when it started trading in like-button stars.
They're bitching now that their grift is up.
LLMs are confidently wrong and make bad engineers think they are good ones. See: https://en.wikipedia.org/wiki/Dunning–Kruger_effect
If you're a skilled dev in a "common" domain, an LLM can be an amazing tool when you integrate it into your workflow and play "code tennis" with it. It can change the calculus on "one-offs", "minor tools and utils" and "small automations" that in the past you could never justify writing.
I'm not a lawyer or a doctor. I would never take legal or medical advice from an LLM. I'm happy to work with the tool on code because I know that domain, because I can work with it and take over when it goes off the rails.
It’s not cleverly generating new code; it’s just re-arranging code that it’s already seen. So, naturally, its usefulness is starting to plateau. The bulk of the improvements we’ll see from here on out will be better adaptability to specific applications.
It’s not destroying open source — but rather, making it accessible to everyone. That includes those that don’t understand it and people who have never had to go through the hazing that OSS culture tends to put newly inducted members through (“rtfm”, “benevolent dictators”, what have you, etc).
So without that culture of exclusive membership (by hazing), oss is now overwhelmed. It’s going to take time for the dust to settle from the stampede, and what’s left will be those who care about the craft and the art of software development. I liken it to what photography did to art, and how art has shifted.
One thing that LLMs will be really great for is accelerating learning. It’s now possible to tailor the output to suit individual needs even greater than before. I’m rather excited to see the possibilities of LLMs in the education space.
I'm a long-time Linux user - now I have more time to debug issues, submit them, and even open pull requests that I considered too time-consuming in the past. I want to, and now can, spend more time debugging Firefox issues that I see, instead of just dropping them.
I'm still learning to use AI well - and I don't want to submit unverified slop. It's my responsibility to provide a good PR. I'm creating my own projects to get the hang of my setup and very soon I can start contributing to existing projects. Maintainers, on the other hand, need to figure out how to pick good contributors at scale.
AI has been good for years now. Good doesn't mean perfect. It doesn't mean flawless. It doesn't mean the hype is spot-on. Good means exactly that: it is good at what it is intended to do.
It is not destroying open source either. If anything, there would be more open source contributors using AI to create code.
You can call anything done by AI "slop" but that doesn't make it so.
Daniel and the curl project were also overreacting. A reaction was warranted, but there were many measures they could have taken before shutting down bug reporting entirely.
If you replace "AI" with "junior dev", "troll" , "spammer", what would things be like then? If it is scale, you can troll, spam and be incompetent at scale just fine without the help of AI.
It's gatekeeping and sentimentality amplified.
I can't wait for people who call everything slop to be overshadowed by people who are so used to LLMs that their usage isn't different than using a linter, a compiler, an IDE: just another tool, good at certain tasks but not others, abusable, but with reasonable mitigations possible.
I keep reading posts about what open source users are owed and not owed. GitHub restricting PRs, developers complaining about burnout. Have you considered using AI "slop" instead? Give a slop response to what you consider to be a slop request? Oh, but no, you could never touch "AI", that would stain you! (I speak to the over-reactors.) You don't need AI; you could do anything AI can do (except AI doesn't complain about it all the time, or demand clout).
What is the largest bottleneck and hindrance to open source adoption? Money? No, many, including myself, are willing to spend for it. I've even lucked out trying to pay an open source project maintainer to support their software. It's always support.
Support means triaging bugs and feature requests in a timely manner. You know what helps with that a lot? A tool that understands code generation and troubleshooting well, along with natural language processing. A bot that can read what people are requesting and give them feedback until their reports meet a certain criterion of acceptability, so you as a developer don't have to deal with the tiring back and forth with them. That same tool can generate code in feature branches, fix people's PRs so they meet your standards and priorities, and highlight changes and how they affect your branch, prioritizing them for you so you can spend minimal time reviewing code and accepting or rejecting PRs.
If that isn't good for open source then what is?
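For the "prioritize them for you" part specifically, a crude sketch might rank open PRs by how cheap they are to review. The scoring heuristic is invented for illustration; the GitHub endpoints (list pulls, fetch a single pull with its additions/deletions/changed_files) are real.

    # Hypothetical sketch: rank open PRs so the small, focused ones get
    # reviewed first. The score is a made-up heuristic; endpoints are the
    # standard GitHub REST API.
    import requests

    API = "https://api.github.com"

    def rank_open_prs(owner: str, repo: str, token: str) -> list[tuple[int, int]]:
        headers = {"Accept": "application/vnd.github+json",
                   "Authorization": f"Bearer {token}"}
        prs = requests.get(f"{API}/repos/{owner}/{repo}/pulls?state=open",
                           headers=headers, timeout=10).json()
        scored = []
        for pr in prs:
            detail = requests.get(f"{API}/repos/{owner}/{repo}/pulls/{pr['number']}",
                                  headers=headers, timeout=10).json()
            # Smaller, more focused changes get a lower (better) score.
            score = detail["additions"] + detail["deletions"] + 50 * detail["changed_files"]
            scored.append((pr["number"], score))
        return sorted(scored, key=lambda item: item[1])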
A bad attitude towards AI is destroying open source projects led by people entrenched in an all-or-nothing, false-dichotomy mindset against AI. And AI itself is good. Not great, not replace-humans great, but good enough for its intended use. Great with cooperative humans in the decision-making loop.
Use the best tool for the task!
that should be like #2 in the developer rule book, with #1 being:
It needs to work.
AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours. I think we're going to see the world move toward a model by which open source projects gain large numbers of dollar contributions, that the maintainers then responsibly turn into AI-generated code contributions. I think this model is going to work really, really well.
For more detail, I have written my thoughts on my blog just the other day: https://essays.johnloeber.com/p/31-open-source-software-in-t...