No, that's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares how the code is written.
If you actually read the wording of Steam's AI survey, you'll see Steam has completely caved on AI-generated code as well. It's specifically worded like this:
> content such as artwork, sound, narrative, localization, etc.
No 'code' or 'programming.'
If game players are the most anti-AI group, and even they don't object to AI code, then it's crystal clear that LLM coding is inevitable.
> This stands in stark contrast to code, which generally doesn't suffer from re-use at all, or may even benefit from it, if it's infrastructure.
Yeah, exactly. And LLMs help developers save time by not rewriting the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing.
> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.
Spore is well acclaimed. Minecraft is literally the best-selling game ever. The fact that one developer fumbled it doesn't make the idea of procedural generation bad. This is a perfect example of how a tool isn't inherently good or bad. It's up to the tool's wielder.
I don't care if LLMs are good at coding or bad at it (in my experience the answer is "it depends"). I don't care how good they are at anything else. What matters in the end is that this tech is not here to empower the common person (although it could). It is not here to make our lives better, more worthwhile, more satisfying (it could do those as well). It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, to funnel even more wealth from those who have little to those who have a lot.
Yet what I see are pigs discussing the usefulness of a bacon-making machine just because it also happens to produce tasty soybean feed. They forget that it is not soybean feed that their owner bought the machine for, and that their owner expects a return on that investment.
I was able to feel wool scarves made in Europe in the Middle Ages (in museum storage, under the guidance of a curator). They are a fundamentally different product from what is produced in woolen mills. A handmade (in the old tradition) woolen scarf can be pulled through a ring, because it is so thin and fine. Not so for a modern mill-made scarf.
Another interesting thing is that we do not know how they made them so fine. The technique was never recorded or documented in detail, as it was passed down from parent to child. So the knowledge is actually lost forever.
Weavers in Kashmir work at a similar level of quality, but their wool is different, their needs and techniques are different, so while we still have craftsmen who can produce wool by hand, most of the traditions and techniques are lost.
Is it a tragedy? I go back and forth. Obviously the heritage fabrics are phenomenal and luxurious. Part of me wishes that the tradition could have been maintained through a luxury sector.
Automation is never a 1:1 improvement. It's not just about the speed or the process. The process itself changes the product. I don't know where we will net out on software, and I do think the complaints are justified - but the Luddites were also justified. They were *Right*. Their whole argument was that the mills could not produce fabric of the same quality. But being right is not enough.
I'm already seeing vibe-coded internal tools at an org I consult for saving employees hundreds of hours a month, because a non-technical person was empowered to build their own solution. It was a mess, and I stepped in to help optimize it, but only partially, making it faster. I let it stay the spaghetti mess it was for the most part - why? Because it was already making an impact. The product was succeeding. And it was a fundamentally different product than what internal tools were 10 years ago.
Most of what we do in programming is some small novel idea at the high level and repeatable boilerplate at the low level. A fair question is: why hasn't the boilerplate been automated away as libraries or other abstractions? LLMs are especially good at fuzzily abstracting repeatable code, and it's simply not possible to get the same result from other, manual methods.
I empathise, because it is distressing to realise that most of the value we provide is not in those lines of code but in that small innovation at the higher layer. No developer wants to hear that; they would like to think every line of code is a creation from their soul.
The article also makes a blanket statement about procedural generation and shows a No Man's Sky image. That game became a huge success.
They said "Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver."
I could name dozens of successful games that used procedural generation, including Minecraft, similar games like Valheim, indie games like Wildermyth, which uses it to create great character narratives, and even games like Diablo. The more recent hit Mewgenics is also procedural, along with past Edmund McMillen games. I swear this person has never played a video game before.
Maybe "Artisanal Coding" will be a thing in the future?
This is an absolute chef-kiss double-entendre.
We are only craftsmen to ourselves and each other. To anyone else we are factory workers producing widgets to sell. Once we accept this then there is little surprise that the factory owners want us using a tool that makes production faster, cheaper. I imagine that watchmakers were similarly dismayed when the automatic lathe was invented and they saw their craft being automated into mediocrity. Like watchmakers we can still produce crafted machines of elegance for the customers who want them. But most customers are just going to want a quartz.
(Worked in Firefox on macOS, doesn't seem to work in Mobile Safari)
Why does the hype (cf. "LLMs") need to be defended?
For example, you're unlikely to see HN replies that go something like, "Yeah, the hype sucks, but..."
Instead, there will almost always be attempts to defend against any criticism of _the hype_
A comment such as "I do not use LLMs", i.e., I don't believe the hype, is likely to be challenged
That's weird, IMHO
Love it. Calling it "Copilot" in itself is a lie. Marketing speak to sell you an idea that doesn't exist. The idea is that you are still in control.
If you don't have the copyright, then you can't license or litigate it under the common rules of software. If someone 'steals' it, you can at best go after them with some trade-secret case, and I suspect even that would be limited if you had already shared the code with them, e.g. because they helped you synthesise it.
If a woman gets married at 25 and her kid is 25, how old is she?
This is what LLMs are dealing with. You don't tell them everything they need to know, and they are left to fill in the gaps. Which may, and often does, mean they lie.
That's what Agentic does differently: it'll go find the gaps before answering.
Agentic is AGI. You can hire many minimum-wage workers who are generally intelligent and don't even go to that level.
In the meantime, the only way to really sort this out is to have models trained only on code under one particular kind of license. There is quite a big corpus of GPL'd code out there, so a GPL-based model could potentially be one of the first, and of course its output could only be GPL.
1. It re-establishes a mental map of the code base.
2. It separates the noise from the signal.
3. It makes it a breeze to refactor and reduce code complexity.
Notice anything yet?
If I replace the AI code input with my own curated techniques from known and measured code sources, in the long term I save more time than those who rely on AI vibe coding.
The commenters here all seem to be weathered San Franciscans. They all deflect and change the subject. Everyone is falling prey to the hype, no hope left.
This seems to describe most commenters in this thread, seeing how the majority defend vibe-coding.
Guilty until proven innocent will satisfy the author's LLM-specific point of contention, but it is hardly a good principle.
With the notable exception of Minecraft terrain generation, which I think most would say was successful in what it set out to achieve.
[1] https://knowyourmeme.com/sensitive/memes/time-to-penis-ttp
It feels like a modern twist on a bygone time of the web.
On a philosophical level I do not get the discussions about paintings. I love a painting for what it is, not for being the first or the only one. An artist who paints something that I can't distinguish from a Van Gogh is a very skillful artist, and the painting is very beautiful. Me labeling it "authentic" or not should not affect its artistic value.
For a piece of code you might care about many things: correctness, maintainability, efficiency, etc. I don't care if someone wrote bad (or good) code by hand or with an LLM; it is still bad (or good) code. Someone, LLM or software developer, has to decide whether the code fits the requirements, and that will not go away.
> but also a specific geographic origin. There's a good reason for this.
Yes, but the "good reason" is more probably people's desire to have monopolies and to resist change. Same as with the paintings: if the cheese is 99% the same, I don't care whether it was made in a particular region or not. Of course the region is happy, because it means more revenue for them, but I'm not sure it is good.
> To stop the machines from lying, they have to cite their sources properly.
I would be curious how this could be applied to a human. Should we also cite all the courses and articles that we have read on a topic when we write code?
btw you can make git commits with the AI as author and yourself as committer, which makes git blame easier.
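For example (the author identity below is just illustrative; there's no standard convention for it):

    # author = the model, committer = you (taken from user.name / user.email)
    git commit --author="Claude <claude@example.com>" -m "Add retry logic"

    # both roles stay visible in history
    git log --format="%an (author) / %cn (committer)"

Since git blame annotates lines with the author, the AI-authored hunks stand out immediately.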
The two I hit most often: the model says "I'm confident this works" without running tests (the completion report just... fabricates results), and the model claims tests pass without executing them. METR found that 30% of agent runs involve reward hacking; models that know they're cheating keep going anyway.
You can't prompt your way out of that. But you can gate it. Block the completion report unless it contains actual proof, real test output, file paths cited. Grep the final output for "should work" and "probably" and force re-verification when they show up. Mechanical, not behavioral.
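A minimal sketch of such a gate, assuming the agent's completion report is available as plain text (the hedge list and evidence patterns are illustrative, not from any particular harness):

    import re

    # Hedging phrases that signal an unverified claim (illustrative list)
    HEDGES = [r"\bshould work\b", r"\bprobably\b", r"\bi'm confident\b"]

    # Evidence the report must cite before "done" is accepted (illustrative)
    EVIDENCE = [r"\b\d+ passed\b",            # pytest-style summary line
                r"[\w./-]+\.(?:py|ts|go)\b"]  # concrete file paths

    def gate(report: str) -> list[str]:
        """Return reasons to reject the report; an empty list means accept."""
        reasons = [f"hedge found: {p}" for p in HEDGES
                   if re.search(p, report, re.IGNORECASE)]
        if not any(re.search(p, report) for p in EVIDENCE):
            reasons.append("no test output or file paths cited")
        return reasons

    print(gate("Refactored the parser, should work now."))
    # -> ['hedge found: \\bshould work\\b', 'no test output or file paths cited']

When gate() returns a non-empty list, the harness sends the reasons back and forces re-verification instead of accepting completion.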
Once you stop accepting the model's self-assessment as evidence, most of the "lying" problem just becomes a testing problem.
While LLMs are surely used to generate a lot of slop code and overwhelm (open source) code bases, that isn't the only thing they can do. I dislike discussing the potential of a technology exclusively by looking at its negative impact.
LLMs in proper hands don't create code that is "stolen"; they also shouldn't create unnecessary code, and they definitely don't remove any of the programmer's ownership, at least not any more than using a mighty IDE does.
The problem seems to be in how LLMs are used. These effects definitely happen when you just release an agent on a codebase without any oversight. But they can also largely be mitigated by using frameworks such as OpenSpec or Spec-Kit: properly designing a spec, a plan, and granular tasks, and manually reviewing all code yourself. The LLM should not be responsible for any creative idea; at most, it should verify practicality against the codebase. Done that way, the entire creative control is in the hands of the programmer, and so is the mechanical execution. The LLM is reduced to a very powerful autocomplete with a strict harness around it. Obviously this doesn't lead to the 10x or even 100x speed improvements some AI merchants promise, but in my personal experience the speedup is still significant enough to make LLMs a very, very useful technology.
The use of loaded and pejorative language like "forgery" emphasizes that this is not a logical argument but a moral one. The repeated comparisons to "true craft" reveal that the author would prefer code be regarded like artisanal cheese.
Beyond the pretension, it's head-in-the-sand to imply that the technology hasn't progressed. It's just very clearly not true to anyone who is paying attention - longer tasks, better code, fewer errors. I'm somebody who actively despises the hype bullshit-machine that SV has turned into, but technology is an industry for pragmatists who can leverage what works. And LLMs do.
If you don't like the technology, you have every right to scream that from the mountaintops. As it stands, this just serves as no more than a rallying cry to the ignorant.
Has this really been people's experience?
I develop and maintain several small FOSS projects, some of which are moderately popular (e.g. 90,000-user Thunderbird extension; a library with 850 stars on GitHub). So, I'm no superstar or in the center of attention but also not a tumbleweed. I've not received a single AI-slop pull request, so far.
Am I an exception to the rule? Or is this something that only happens for very "fashionable" projects?
A Private (system) Investigator. :)
No. When it comes to software development, AI use is best practice now; if you're not proficient in the tools you're not really a professional software engineer. Shape up or ship out.
I’ve seen reluctance to refactor even 10+-year-old garbage long before LLMs were first made available to the broader public.
“Look at me and the code that took me eons to perfect. It’s handcrafted and genuine.”
Newsflash: nobody cares, especially if it’s expensive, time consuming, or doesn’t work. They also don’t care about “artisanal” PDO cheese with a 30% tariff that still tastes like shit.
“The posers are stealing our thunder. Forgery!! They terk r jerbs!”
Sink or swim. You decide.
In order to lie, one needs to understand what truth and objective reality are.
Even with people, when a flat-earther tells you the earth is flat, they're not lying, they're just wrong.
All LLM output is speculation. All speculation, by definition, has some probability of being incorrect.
---
We can go even deeper in a philosophical sense. If I make the audacious claim that 2 + 2 = 4, I may think it's true, but I'm still speculating that the objective reality I experience is the same one others experience, and that my senses and mental faculties, and therefore the qualia making up my reality, are indeed intact, correct, and functional. So is there a degree of speculation even in that claim?
Regardless, I am able to agree upon a shared reality with the rest of the world, and I also share a common understanding of truth and untruth. If I lied, it can only be because of an intention to mislead others. For example, if I claimed to be the president of the United States, of course that would be incorrect (thankfully!), but since we all agree that no one reading this post would actually be misled into thinking I am the POTUS, it isn't a lie. Perhaps sarcasm, a failed attempt at humor, or just trolling. It is untruth, but it isn't a lie; no one was misled. You need intent (which an LLM isn't capable of), and that intent needs to be, at least in part, an intent to mislead.
Claude makes me mad: even when I ask for small code snippets to be improved, it increasingly starts to comment on "what I could improve" in the code instead of generating the embarrassingly easy code with the improvement itself.
If I point that out with something like "include that yourself", it does a decent job.
That's so _L_azy.
> If someone produces a painting in the style of Van Gogh, and passes it off as being made by Van Gogh, by putting his signature on it, that painting is a forgery.
Which is true. But the implication that follows is false.
Van Gogh's artwork is valuable specifically because of his identity. I find much of his artwork particularly hideous. That's fine! Someone else finds value in it specifically because of who made it.
This metaphor doesn't appear to apply to code at all. The entire value of code is what it does, not who wrote it.
Honestly, I stopped reading after the first bullet point because these types of arguments feel lazy, and the attitude of the people writing these articles frequently comes across as holier-than-thou.
You don't like LLMs? Great, don't use them. Using Van Gogh's paintbrush doesn't mean I'm making a forgery. I'm just painting, my friend.
>What's the Excel of JSON
Ever heard of CUE, the JSON- and YAML-compatible language introduced by ex-Googlers? It seamlessly supports both types and values, whereas Excel supports ephemeral values [1].
Both CUE and the original Excel are non-Turing-complete, so they don't have the notorious and tricky halting problem.
Someone needs to seamlessly integrate LLMs with CUE, their deterministic distant cousin from NLP, based on lattice-valued logic [2], [3].
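To make the lattice idea concrete, here's a toy Python model of CUE-style unification, where types and values live in one lattice and unification is the meet (greatest lower bound). This is only an illustration of the concept, not CUE's actual semantics:

    # Toy lattice: TOP = "no constraint", BOTTOM = "conflict"
    TOP = "any"
    BOTTOM = "error"

    def unify(a, b):
        if a == TOP:
            return b
        if b == TOP:
            return a
        if a == b:
            return a
        # a type unified with a value of that type narrows to the value
        if a is int and isinstance(b, int):
            return b
        if b is int and isinstance(a, int):
            return a
        return BOTTOM  # e.g. two different concrete values

    assert unify(TOP, int) is int   # any & int -> int
    assert unify(int, 42) == 42     # int & 42  -> 42 (type meets value)
    assert unify(42, 42) == 42      # unification is idempotent
    assert unify(42, 43) == BOTTOM  # conflicting values have no meet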
Truth be told, LLMs are like the automated looms of 19th-century Britain that kick-started the Industrial Revolution. Heck, the Toyota conglomerate was once a pioneering manufacturer of modern automated looms, and look where they are now after embracing change and pivoting to vehicle manufacturing.
The automated loom commoditized the manual weaving industry (not unlike modern software engineering) into oblivion in India, turning rich Mughal India, with the highest GDP in the whole wide world, into the lowest-GDP India of colonial times (include the Indian subcontinent, namely Afghanistan, Pakistan, and Bangladesh, if you want an apples-to-apples comparison) [4].
Ignore LLMs at your peril in the name of so-called moral authenticity/forgery/lies/etc., and you can go the way of 20th-century India and its subcontinent, settling at only a fraction of the Mughal Empire's GDP at its very peak.
> Is there a standard CRDT-like protocol for syncing editable graphs yet?
It's for other HN comments, but spoiler alert: it's called D4M, by the nice folks from MIT [5]. We probably don't need full CRDTs; local-first capability with eventual consistency will be more than sufficient for most things of importance.
[1] CUE lang:
https://cuelang.org/
[2] The Logic of CUE:
https://cuelang.org/docs/concept/the-logic-of-cue/
[3] Guardrailing Intuition: Towards Reliable AI:
https://cue.dev/blog/guardrailing-intuition-towards-reliable...
[4] Economy of the Mughal Empire:
https://en.wikipedia.org/wiki/Economy_of_the_Mughal_Empire
[5] D4M: Dynamic Distributed Dimensional Data Model:
A short design note and tribute to Richard Stallman (RMS) and St. IGNUcius for the term Pretend Intelligence (PI) and the ethic behind it: don’t overclaim, don’t over-trust, and don’t let marketing launder accountability.
https://github.com/SimHacker/moollm/blob/main/designs/PRETEN...
1. What PI Is
Richard Stallman proposes the term Pretend Intelligence (PI) for what the industry calls “AI”: systems that pretend to be intelligent and are marketed as worthy of trust. He uses it to push back on hype that asks people to trust these systems with their lives and control.
From his January 2026 talk at Georgia Tech (YouTube, event, LibreTech Collective):
https://www.youtube.com/watch?v=YDxPJs1EPS4
> "So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them." — Richard Stallman, Georgia Tech, 2026-01-23. Source: YouTube (full talk) — "Dr. Richard Stallman @ Georgia Tech - 01-23-2026," Alex Jenkins, CC BY-ND 4.0; transcript in video description.
So PI is both a label (call it PI, not AI) and a stance: resist the campaign to make people trust and hand over control to systems and vendors that don’t deserve that trust. In MOOLLM we use the same framing: we find models useful when we don’t overclaim — advisory guidance, not a guarantee (see MOOAM.md §5.3).
[...]
Richard Stallman critiques AI, connected cars, smartphones, and DRM (slashdot.org):
https://news.ycombinator.com/item?id=46757411
https://news.slashdot.org/story/26/01/25/1930244/richard-sta...
GNU: Words to Avoid: Artificial Intelligence:
https://www.gnu.org/philosophy/words-to-avoid.html#Artificia...
...currently not responding... archive.org link:
https://web.archive.org/web/20260303004610/https://www.gnu.o...