by cheesecompiler
5 subcomments
- > I personally think all of this is exciting. I’m a strong supporter of putting things in the open with as little license enforcement as possible. I think society is better off when we share, and I consider the GPL to run against that spirit by restricting what can be done with it.
I like sharing too, but could permissive-only licenses not backfire? GPL emerged in an era where proprietary software ruled and companies weren't incentivized to open-source. GPL helped ensure software stayed open, which helped it become competitive against the monopoly proprietary giants resting on their laurels. The restriction helped innovation, not the supposedly free market.
by marcus_holmes
4 subcomments
- > For me personally, what is more interesting is that we might not even be able to copyright these creations at all. A court still might rule that all AI-generated code is in the public domain, because there was not enough human input in it. That’s quite possible, though probably not very likely.
As I understand it, the US Supreme Court has just this week ruled exactly this. LLM output cannot be copyrighted, so the only part of any piece of software that can be copyrighted is that part that was created by a human.
If you vibe-code the entire thing, it's not copyrightable. And if it can't be copyrighted that means it is in the public domain from the instant it was created and can't be licensed.
by vbarrielle
0 subcomments
- The test suite was also licensed under the LGPL. The reimplementation can be seen as a derivative work of the test suite, and thus should fall under the LGPL. This does not even mention the fact that the coding agent, AND the user steering it, both had ample exposure to chardet's source code, making it hard to argue that the reimplementation is a new ship.
- In this emerging reality, the whole spectrum of open-source licenses effectively collapses toward just two practical choices: release under something permissive like MIT (no real restrictions), or keep your software fully proprietary and closed.
These are fascinating, if somewhat scary, times.
- Strange thing with this whole incident, apart from the rewrite/LLM part, is the general misunderstanding of the licenses. LGPL is a pretty permissive one, going as far as allowing one to incorporate it into proprietary code, since its reciprocity clause covers only the library itself and how you link to it [1], and MIT is even more permissive.
Importantly, these licenses were meant to protect the USER of the code. Not the dev, nor the company, nor the CLA holder: the USER is primary in the Free Software world. Or at least was supposed to be; OSS muddied the waters, and forgetting the old lessons, learned back when things were basically bigcorp vs. an indie hacker trying to get their electronic device to connect to what they want it to connect to and do what they need, is why we're here.
Bikeshedding until we eventually come full circle and understand why those decisions were made.
In a world where the large OEMs and bigcorps are increasingly locking down firmware, bootloaders, kernels, and the internet, I would think a reappraisal of more enforcement that benefits the USER is paramount.
Instead we have devs looking to tear down the few user protections FLOSS provides and usher in a locked-down, hacker-unfriendly future.
[1] https://licensecheck.io/blog/lgpl-dynamic-linking
- hopefully this continues to show how awkward the idea of "intellectual property" (IP) is until people abandon it
IP sounds good in theory but enables things like "patent trolling" by large corps and creating all kinds of goofy barriers and arbitrary questions like we're asking about if re-implementations of ideas are "really ours"
(maybe they were never anyone's in the first place, outside of legally created mentalities)
ideas seem to fundamentally not operate like physical things so asserting they can be considered "property" opens the door for all kinds of absurdities like as pondered in the OP
by globular-toast
0 subcomments
- > The motivation: enabling relicensing from LGPL to MIT.
Good heavens, that's incredibly unethical. I suppose I should expect nothing more from a profession that has shied away from ethics essentially since its conception.
> I think society is better off when we share
Me too.
> and I consider the GPL to run against that spirit by restricting what can be done with it.
The GPL explicitly allows anyone to do anything with it, apart from not sharing it.
You want me to share with you, but you don't want to share with me.
- > Unlike the Ship of Theseus, though, this seems more clear-cut: if you throw away all code and start from scratch, even if the end result behaves the same, it’s a new ship.
That's not how copyright works. It doesn't require exact copies. You also can't just rephrase an existing book from scratch when the ideas expressed are essentially the same. Same with music.
by 7777777phil
1 subcomment
- The legal question is a distraction. GPL was always enforced by economics: reimplementation had to cost more than compliance. At $1,100 for 94% API coverage, it doesn't. Copyleft was built for a world where clean-room rewrites were painful but they aren't anymore.
by PaulDavisThe1st
2 subcomments
- US courts have ruled that machine-generated code cannot be copyrighted. Ergo, it cannot be licensed (under any license; nobody owns the copyright, thus nobody can "license" it to anyone else).
You cannot (*) use LLMs to generate code that you then license, whether that license is GPL, MIT or some proprietary mumbo-jumbo.
(*) unless you just lie about this part.
by Splinelinus
3 subcomments
- I'm waiting for AGPL to become AIGPL:
If you train a model with some or all of the licensed work, you agree that the weights of that model constitute a derivative work, and further for the weights, as well as any inference output produced as a result of those weights to be bound by the terms of the license.
If you run a model with the licensed work in part or in full as input, you agree that any output from the model is bound by the terms of the license.
- > I’m a strong supporter of putting things in the open with as little license enforcement as possible. I think society is better off when we share, and I consider the GPL to run against that spirit by restricting what can be done with it.
This is a head-spinning argument. The whole point of GPL is to force more things out into the open. You'd think someone who espouses open source would cheer the GPL. The only practical difference between MIT and GPL is that the former allows more closed-source code.
This feels analogous to the paradox of freedom. Truly unlimited freedom would include the freedom to oppress others, so "freedom maximalism" is an unsound philosophy (unless applied solipsistically).
When I publish, I tend to do so under MIT. I also write plenty of closed-source code. And I do generally believe in open source. But I don't use that as a justification for preferring MIT. If anything, I like MIT despite believing in open source, not because. Mainly because I want people to actually use what I wrote.
- The solution to this whole situation seems pretty simple to me. LLMs were trained on a giant mix of code, and it's impossible to disentangle it, but a not-insignificant portion of their capabilities comes from GPL-licensed code. Therefore, any codebase that uses LLM code is now GPL. You have a proprietary product? Not anymore.
Not saying there's legal precedent for that right now, but it's the only thing that makes any sense to me. Either that, or retrain the models on only MIT/similarly licensed code, or code you have explicit permission to train on.
- > But this all causes some interesting new developments we are not necessarily ready for. Vercel, for instance, happily re-implemented bash with Clankers but got visibly upset when someone re-implemented Next.js in the same way.
Kinda surprised nobody commented on this
- Perhaps code licensing is going to become more similar to music.
e.g. Somebody wrote a library, and then you had an LLM implement it in a new language.
You didn't come up with the idea for whatever the library does, and you didn't "perform" the new implementation. You're neither writer nor performer, just the person who requested a new performance. You're basically a club owner who hired a band to cover some tunes. There's a lot involved in running a club, just like there's a fair bit involved in operating an LLM, but none of that gives you rights over the "composition". If you want to make money off of that performance, you need to pay the writer and/or satisfy whatever terms and conditions they've made the library available under.
IANAL, so I don't even know what species of worms are inside this can I've opened up. It seems sensible, to me, that running somebody else's work through an LLM shouldn't give you something that you can then claim complete control over.
---------
Edit: For the sake of this argument, let's pretend we're somewhere with sensible music copyright laws, and not the weird piano-roll derived lunacy that currently exists in the U.S.
by mellosouls
0 subcomments
- Note the Ship of Theseus, while a nice comparison for the title, is not - as the author eventually points out - an appropriate analogy here. Fundamental to the question of whether an entity's identity persists is the continuity between intermediate states.
In the example given and discussed here over the last couple of days, the process seems more akin to having an AI create a cast of the pre-existing work and fill it to produce the new one.
- It's funny that real value is now in test suites. Or maybe it's always been...
- I think at the core this is a problem of abuse of the commons and parasitic and extractive behavior being tolerated as a norm.
How would I defend myself against hostile entities and societal norms that make it OK to steal from me and my effort without compensation? I will close my doors, put up walls, and distrust more often.
That's clearly the trend the world is going towards, and I don't see that changing until we find a way to make it cheaper to detect deception and parasitic behavior, along with holding said entities accountable. Since our world leaders have a history of unaccountable leadership, and they are the ones who model this behavior, I have difficulty seeing the norms change without drastic worldwide leadership change.
by StephenHerlihyy
0 subcomments
- At what point does the cost of reimplementation shrink below the benefits of obfuscation? Consider a new CVE in Linux. Well, maybe my Linux is not the same as the public one. Maybe I just set a swarm of AI agents on making me a drop-in replacement that is different but has an identical interface. Same-same but different. Right now, writing your own OS to replace the entirety of Linux would be costly and error-prone. Foolish. But will it always be? What happens when Claude Code Infinite Opus can one-shot a perfect reimagining in 24 hours? Or 30 minutes? Do all my servers have the same copy, or are they all slightly different implementations of the same thing? I dunno.
- Porting code from one programming language to another will be one of the most important tasks of code gen A.I.
Imagine doing the same with vehicle engines. Less fuel consumption, less pollution, less weight and who knows how many more benefits.
Just letting the A.I. do it by itself is sloppy, though. The real benefit is derived only when the resulting port is of equal or better quality than the original. It needs a more systematic approach, with a human in the loop and good tools to index and select data from both codebases, the original and the ported one. The tools haven't been invented yet, but we will get there.
by jFriedensreich
0 subcomments
- Non-permissive licenses, open core, and proprietary software will just not survive. There is no reality in which I or anyone in my community would use something like, e.g., Raycast or the SaaS email clients that someone locks down, extracts rent from, and makes top-down decisions on. The experience of being able to change anything about the software I use with a prompt, while using it, is impossible to come back from to all the glitches, limitations, and stupidities. We have to come to terms with infinite software.
by LucasAegis
3 subcomments
- AI is merely a sophisticated tool. If your original thoughts achieve a tangible result through this tool, the ownership should reside with the thinker. Reverse-engineering, in this context, shouldn't be seen merely as an infringement on AI-generated code, but as a violation of the human intellect and systemic design that orchestrated that code. We need to move past protecting 'lines of code' and start protecting the 'intent and architecture' behind them.
- Maybe, just maybe, this whole AI thing could result in us collectively waking up and realizing copyright is entirely unsuitable for software.
by andsoitis
1 subcomment
- > I’m a strong supporter of putting things in the open with as little license enforcement as possible.
> © Copyright 2026 by Armin Ronacher.
Oooohkaaaay?
by philipwhiuk
0 subcomments
- Meanwhile elsewhere: https://www.theguardian.com/technology/2026/mar/06/uk-arts-m...
- This is awful news, but I don't know what can be done, is it possible to have a new GPL4 that deals with this? I doubt it.
by cheesecompiler
0 subcomments
- After cloning a test suite you're still left with ongoing maintenance and development, maintaining feature parity, etc. There's a lot more to it than passing a test suite. If the rewrite is truly superior, it deserves to become the new Ship of Theseus. But, e.g., I doubt anyone's AI rewrites of SQLite will ever put a dent in its marketshare.
- Pretty simple: if the model was trained on GPL or any copyleft code, then the output is copyleft (in whole or in part!). You just have a really long preprocessing step before hitting compile.
by ChrisMarshallNY
0 subcomments
- > slopforks
Good term.
For myself, I tend to have a similar view as the author (I publish MIT on most of my work), but it’s not really something I’m zealous about, and I’m not really into “slopforking” the work of others. I tend to prefer reinventing the wheel.
- Translate an alternative?
https://github.com/albfernandez/juniversalchardet
- I think the reimplementation in question rubs people the wrong way because of the intentions of the parties on both ends, and because one of them ignored (erased, from some POVs) the other's. The original author of the code obviously chose the license they did intentionally (copyleft "keep it open" reasons, seemingly). And the rewrite author has their intentions as well (unknown, beyond "fewer restrictions on derivatives"). The problem comes when those intentions conflict, and in this case the rewrite author basically just ignored the usual convention for resolving the conflict, which is forking or just starting a new project.
Claiming "I've maintained it for a while so I can do whatever I want" is kinda gross because it just completely overrides the original author's intentions with their own. They're basically saying "my intentions as maintainer are more important than the creator's," and that doesn't feel even. The "is it a real clean room?" question, given prior exposure through LLM training and working on the codebase, is always going to be contentious. But the question "should I override and erase someone else's intentions?" is easy to answer: no. Especially since we have come up with so many ways to make it easy not to (forking is practically free, the abstraction of APIs is powerful, etc.).
It also just feels a little nefarious. There isn't much reason to switch between those licenses in question beyond allowing the code to be more tightly integrated into something commercial and closed-source. In which case, having an LLM write a compatible rewrite _in a new project_ seems reasonable at the current moment in time. It's this intentional overriding of the original intentions, seemingly _for profit_ as well, that is the grossest part, because the alternatives are just so easy and common.
by infinitewars
0 subcomments
- The ship never existed, only the idea of a ship.
by davidcollantes
0 subcomments
- > Right now I would argue that unless some evidence of the contrary could be provided, this can be seen as a new implementation from ground up.
Not the Ship of Theseus, but a "new implementation from ground up."
Evidently, the author prefers MIT (https://github.com/chardet/chardet/issues/327#issuecomment-4...), and seems OK with slop-coding.
- I mean, it has to be asked... was the source of chardet not in the training set...?
by __mharrison__
0 subcomments
- Licensing is done. Reimplementation will be too easy...
- > There is an obvious moral question here, but that isn’t necessarily what I’m interested in.
Interestingly, that's also the exact spot where I stopped reading.
The dilution of morals weakens societies. We ignore them at our own peril; the planet, and most certainly any god figure, doesn't care.
- > There is an obvious moral question here, but that isn’t necessarily what I’m interested in.
And thus we arrive at the absolute shit state the world is in. We keep putting morality aside for something “more interesting” and then forget to factor it back in when making the final point.
“Have you tried: “kill all the poor?””
https://youtube.com/watch?v=s_4J4uor3JE
by radarsat1
1 subcomment
- This is interesting because I've been considering a similar project. I maintain a package for a scientific simulation codebase; it's all in Fortran and C++ with too much template code, which takes ages to build, is very error-prone, and frankly is a pain to maintain with its monstrous CMake spaghetti build system. Furthermore, the whole thing would benefit from a rewrite around GPU-based execution, and generally from a better separation between the API for specifying the simulation and the execution engine. So I've been thinking of rewriting it in Jax, and did an initial experiment to port a few of the main classes to Python using Gemini. It did a fairly good job.
I want to continue with it, but I'm also a bit hesitant, because this is software that the upstream developers have been working on for 20+ years. The idea of just saying to them "hey look, I rewrote this with AI and it's way better now" is not something I would do without giving myself pause for thought. In this case it's not about the license (they already use a permissive one), but just the general principle of suggesting a "replacement" for their work. If I were doing it by hand it might be different, I don't know; they might appreciate that more, but I have no interest in spending that much time on it. Probably what I will do is just present the PoC and ask if they think it's worth attempting to auto-convert everything; they might be open to it.
But yeah, the possibilities of auto-transpiling huge amounts of software for modernization purposes are a really interesting application of AI; amazing to think of all the possibilities. But I'm happy to have read the article, because I certainly didn't think about the copyright implications.
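To make the shape of such a port concrete, here is a purely hypothetical sketch (the function name and the finite-difference kernel are invented, not taken from any real codebase): a 1-based, explicit-loop Fortran routine becomes a 0-based, non-mutating Python function of the array-oriented style a Jax port would build on.

```python
# Hypothetical Fortran-style original (1-based indexing, explicit loop):
#
#   do i = 2, n - 1
#     out(i) = u(i) + dt * d * (u(i+1) - 2*u(i) + u(i-1)) / dx**2
#   end do

def diffuse_step(u, dt, dx, d):
    """One explicit finite-difference diffusion step, ported to idiomatic
    Python: 0-based indexing, no in-place mutation of the input array."""
    coeff = dt * d / dx ** 2
    interior = [
        u[i] + coeff * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ]
    # Boundary values stay fixed, matching the Fortran loop bounds.
    return [u[0], *interior, u[-1]]


if __name__ == "__main__":
    u0 = [0.0, 0.0, 1.0, 0.0, 0.0]
    print(diffuse_step(u0, dt=0.1, dx=1.0, d=1.0))
```

The interesting part of an LLM-assisted port is exactly this index-convention and mutability shift; the comprehension form is also the step that maps almost mechanically onto a vectorized Jax expression later.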
- [dead]
by StacyRawls
0 subcomments
- [dead]
- I know it's a bit off-topic, but https://www.youtube.com/watch?v=DTYnzLbHUHA
- [flagged]
by moralestapia
3 subcomments
- [flagged]
- > A court still might rule that all AI-generated code is in the public domain, because there was not enough human input in it. That’s quite possible, though probably not very likely.
It's not only likely; it is in fact the current position, at least in the US.