ChatGPT is free and available to everyone, and so are a dozen other LLMs. If the person making the comment wanted to know what ChatGPT had to say, they could just ask it themselves. I guess people feel like they’re being helpful, but I just don’t get it.
Though with that said, I’m happy when they at least say it’s from an LLM. At least then I know I can ignore it. Worse is when they reply as if it’s their own answer when really it’s just copy-pasted from an LLM. Those are more insidious.
If anyone gives me an opinion from an AI, they disrespect me and themselves to the point that they are dead to me in an engineering capacity. Once someone outsources their brain, they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry because they are so easily replaceable.
If this pisses you off, ask yourself why.
A simple solution would be to mandate that while posting conversations with AI in PR comments is fine, all actions and suggested changes should be human-generated.
The human-generated actions can’t be a lazy “Please look at the AI suggestion and incorporate as appropriate” or “What do you think about this AI suggestion?”.
Acceptable comments could be:
- I agree with the AI for xyz reasons, please fix.
- I thought about the AI’s suggestions, and here are the pros and cons. Based on that, I feel we should make xyz changes for abc reasons.
If these best practices are documented, and the reviewer does not follow them, the PR author can simply link to the best practices and kindly ask the reviewer to re-review.
Like, it's fine for you to use AI, just like one would use Google. But you wouldn't paste "here are 10 results I got from Google". So don't paste whatever AI said without doing the work, yourself, of reviewing and making sense of it. Don't push that work onto others.
Stories full of nonsensical, clearly LLM-generated acceptance requirements containing implementation details which are completely unrelated to how the feature actually needs to work in our product. Fine, I didn't need them anyway.
PRs with those useless, uniformly-formatted LLM-generated descriptions which don't do what a PR description should do, with a half-arsed LLM attempt at a summary of the code changes and links to the files in the PR description. It would have been nice if you had told me what your PR is for and what your intent as the author is, and maybe called out things relevant to the implementation that I might have "why?" questions about. But fine, I guess; being able to read, understand and evaluate the code is part of my job as a reviewer.
---- < the line
PRs littered with obvious LLM comments you didn't care enough to take out, where something minor and harmless but _completely pointless_ has been added (as in, if you'd read and understood what this code does, you'd have removed it), with an LLM comment left in above it AND at the end of the line. It feels like I'm the first person to have tried to read and understand the code, and I feel like asking open-ended questions like "Why was this line added?" to get you to actually read and think about what's supposed to be your code, rather than leaving a review comment explaining why it's not needed, which would just act as a direct conduit from me to your LLM's "You're absolutely right!" response.
Think of it as a dynamic opinion poll -- the probabilistic take on this thing is such and such.
As a bonus you can prime the respondent's persona.
// After posting, I see another comment at the bottom opening with "Counterpoint:"... Different point, though.
Wait no, if your boss is making you use this tech, then it's his fault.
Wait no, if the companies are selling this as the holy grail of firing everyone to save money, it's the fault of whoever is buying this trash without testing it
Wait no, if this is trained on the entirety of human information, then we are all wrong for not deleting all of our bad answers written on the internet
I know ChatGPT exists. I could have fucking copied-and-pasted my question myself. I'm not asking you to be the interface between me and it. I'm asking you what you think, what your thoughts and opinions are.
No one I know who says this kind of thing would read this article. People love being lazy.
One example: Code reviews are inherently asymmetrical. You may have spent days building up context, experimenting, and refactoring to make a PR. Then the reviewer is expected to have meaningful insight in (generously) an hour? AI code reviews help bring balance; they may notice stuff a human wouldn't, and it's ok for the human reviewer to say "hey, chatgpt says this is an issue but I'm not sure - what do you think?"
We run all our PRs through automated (Claude) reviews, and it helps a LOT.
Another example: Lots of times we have several people debugging an issue and nobody has full context. Folks are looking at code, folks are running LLM prompts, folks are searching Slack, etc. Sometimes the LLMs come up with good ideas but nobody is sure, because none of us have all the context we need. "Chatgpt says..." is a way of bringing it to everyone's attention.
I think this can be generalized to forum posts. "Chatgpt says" is similar to "Wikipedia says". It's not the end of the conversation, but it helps get everyone on the same page, especially when nobody is an expert.
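For context on the automated PR review step mentioned above, here is a minimal sketch of one way such a job could be wired up, assuming a CI environment with the repo checked out and `ANTHROPIC_API_KEY`, `GITHUB_TOKEN`, `GITHUB_REPOSITORY`, and `PR_NUMBER` available, plus the `anthropic` and `requests` packages installed. The model name, prompt, and environment-variable names are placeholders for illustration, not the commenter's actual setup.

```python
# Hypothetical CI step: ask an LLM to review a PR diff and post the result
# as a clearly labelled PR comment. Model name and env vars are placeholders.
import os
import subprocess

import anthropic
import requests

REPO = os.environ["GITHUB_REPOSITORY"]      # e.g. "owner/repo" (set by GitHub Actions)
PR_NUMBER = os.environ["PR_NUMBER"]         # assumed to be passed in by the workflow
BASE_REF = os.environ.get("BASE_REF", "origin/main")

# 1. Collect the diff a human reviewer would read. Real setups would
#    truncate or chunk very large diffs; this sketch just caps the size.
diff = subprocess.run(
    ["git", "diff", f"{BASE_REF}...HEAD"],
    capture_output=True, text=True, check=True,
).stdout[:100_000]

# 2. Ask the model for review notes, framed as suggestions for a human
#    reviewer to evaluate, not as a verdict.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1500,
    messages=[{
        "role": "user",
        "content": (
            "Review this diff. Flag likely bugs, risky changes, and missing "
            "tests. Be concise and mark uncertain points as uncertain.\n\n" + diff
        ),
    }],
)
review = response.content[0].text

# 3. Post the output as a labelled comment so nobody mistakes it for a
#    human reviewer's own opinion.
requests.post(
    f"https://api.github.com/repos/{REPO}/issues/{PR_NUMBER}/comments",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    json={"body": "**Automated LLM review (verify before acting):**\n\n" + review},
    timeout=30,
).raise_for_status()
```

Whatever the tooling, the label on the posted comment is the part worth keeping: marking the output as machine-generated tells everyone in the thread that it still needs a human to verify it.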
I wrote before about just sending me the prompt[0], but if your prompt is literally my code then I don't need you at all.
----
In many of the Hacker News comments, a core complaint was not just that AI is sometimes used lazily, but that LLM outputs are fundamentally unreliable—that they generate confidently stated nonsense (hallucinations, bullshit in the Frankfurtian philosophical sense: speech unconcerned with truth).
Here’s a more explicitly framed summary of that sentiment:
⸻
Central Critique: AI as a Bullshit Generator
Many commenters argue that:
• LLMs don’t “know” things—they generate plausible language based on patterns, not truth.
• Therefore, any use of them without rigorous verification is inherently flawed.
• Even when they produce correct answers, users can’t trust them without external confirmation, which defeats many of the supposed productivity gains.
• Some assert that AI output should be treated not as knowledge but as an unreliable guess-machine.
Examples of the underlying sentiment:
• “LLMs produce bullshit that looks authoritative, and people post it without doing the work to separate truth from hallucination.”
• “It costs almost nothing to generate plausible nonsense now, and that cheapness is actively polluting technical discourse.”
• “‘I asked ChatGPT’ is not a disclaimer; it’s an admission that you didn’t verify anything.”
⸻
Philosophical framing (which commenters alluded to)
A few participants referenced Harry Frankfurt’s definition of bullshit:
• The bullshitter’s goal isn’t to lie (which requires knowledge of the truth), but simply to produce something that sounds right.
• Many commenters argue LLMs embody this: they’re indifferent to truth, tailored to maximize coherence, authority, and user satisfaction.
This wasn’t a side issue—it was a core rejection of uncritical AI use.
⸻
So to clarify: the strong anti-AI sentiment isn’t just about laziness.
It’s about:
• Epistemic corruption: degrading the reliability of discourse.
• False confidence: turning uncertainty into authoritative prose.
• Pollution of knowledge spaces: burying truth under fluent fabrication.