https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
(This is a broader restriction than the one you're looking for).
It's important to understand that not all of the rules of HN are on the Guidelines page. We're a common law system; think of the Guidelines as something akin to a constitution. Dan and Tom's moderation comments form the "judicial precedent" of the site; you'll find things in there like "no Internet psychiatric diagnosis" and "not owing $publicfigure anything but owing this community more" and "no nationalist flamewar" and "no hijacking other people's Show HN threads to promote your own thing". None of those are on the Guidelines page either, but they're definitely in the guidelines here.
The pre-LLM equivalent would be: "I googled this, and here's what the first result says," and copying the text without providing any additional commentary.
Everyone should be free to read, interpret and formulate their comments however they'd like.
But if a person outsources their entire thinking to an LLM/AI, they don't have anything to contribute to the conversation themselves.
And if the HN community wanted pure LLM/AI comments, they'd introduce such bots in the threads.
1. If I wanted to run a web search, I would have done so.
2. People behave as if they believe AI results are authoritative, which they are not.
On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.
I feel like we're having a larger conversation here, one where we are watching etiquette evolve in realtime. This is analogous to "Should we ban people from wearing bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that is disrupting social norms but the actual violation is really that the person looks like a dork. To that end, I'd probably be more for public shaming, potentially a clear "we aren't banning it but please don't be an AI goober and don't just regurgitate AI output", more than I would support a ban.
That said, I've also grown exceedingly tired of everyone saying, "I see an em dash, therefore that comment must have come from AI!"
I happen to like em dashes. They're easy to type on macOS, and they're useful in helping me express what I'm thinking—even if I might be using them incorrectly.
We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.
At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
I'm not sure a full ban is possible, but LLM-written comments should at least be strongly discouraged.
https://news.ycombinator.com/item?id=36735275
Just curious if chatGPT is actually formally banned on HN?
Hacker News <hn@ycombinator.com>, Sat, Jul 15, 2023, 4:12 PM:
Yes, they're banned. I don't know about "formally" because that word can mean different things and a lot of the practice of HN is informal. But we've definitely never allowed bots or generated comments. Here are some old posts referring to that.
dang
https://news.ycombinator.com/item?id=35984470 (May 2023)
https://news.ycombinator.com/item?id=35869698 (May 2023)
https://news.ycombinator.com/item?id=35210503 (March 2023)
https://news.ycombinator.com/item?id=35206303 (March 2023)
https://news.ycombinator.com/item?id=33950747 (Dec 2022)
https://news.ycombinator.com/item?id=33911426 (Dec 2022)
https://news.ycombinator.com/item?id=32571890 (Aug 2022)
https://news.ycombinator.com/item?id=27558392 (June 2021)
https://news.ycombinator.com/item?id=26693590 (April 2021)
https://news.ycombinator.com/item?id=22744611 (April 2020)
https://news.ycombinator.com/item?id=22427782 (Feb 2020)
https://news.ycombinator.com/item?id=21774797 (Dec 2019)
https://news.ycombinator.com/item?id=19325914 (March 2019)
(Edit: oh, it's not 2024 anymore. How time flies!)
Small exception if the user is actually talking about AI, and quoting some AI output to illustrate their point, in which case the AI output should be a very small section of the post as a whole.
Though it's unlikely this exact scenario has ever happened, I'd equate it with someone asking me what I thought about something, and me walking them over to a book on the shelf to show them what that author thought. It's just an aggregated and watered-down average of all the books.
I’d rather hear it filtered through a brain, be it a good answer or bad.
Saying “ChatGPT told me …” is a fast track to getting your input dismissed on our team. That phrasing shifts accountability from you to the AI. If we really wanted advice straight from the model, we wouldn’t need a human in the loop - we’d ask it ourselves.
Sure, I'll occasionally ask an LLM about something if the info is easy to verify after, but I wouldn't like comments here that were just copy-pastes of the Google search results page either.
Copying and pasting from ChatGPT contributes no more to the discussion than pasting the question into Google and submitting the result would.
Everyone here knows how to look up an answer in Google. Everyone here knows how to look up an answer in ChatGPT.
If anyone wanted a Google result or a ChatGPT result, they would have just done that.
1) Borderline. Potentially provides some benefit to the thread for readers who also don't have the time or expertise to read an 83-page paper, although it would require someone to acknowledge and agree that the summary is sound.
2) Acceptable. Dude got Grok to make some cool visuals that otherwise wouldn't exist. I don't see what the issue is with something like this.
3) Borderline. Same as 1, mostly.
The more I think about this, the less bothered I am by it. If the problem were someone jumping into a conversation they know nothing about, and giving an opinion that is actually just the output of an LLM, I'd agree. But all the examples you provided are transformative in some way. Either summarizing and simplifying a long article or paper, or creating art.
I think sometimes it's fine to source additional information from an LLM if it helps advance the discussion. For example, if I'm confused about some topic, I might explore various AI responses and look at the source links they provide. If any of the links seem compelling I'll note how I found the link through an LLM and explain how it relates to the discussion.
I don't recall any instances where I've run into the problem here, maybe because I tend to arrive at threads as a result of them being popular (listed on Google News), which means I'm only going to read the top 10-50 posts. I read human responses for a bit before deciding if I should continue reading, and that's the same system I use for LLMs, because sometimes I can't tell just by the formatting; if it's good, it's good, and if it's bad, it's bad. I don't care if a chicken with syphilis wrote it.
The HN guidelines haven't yet been updated but perhaps if enough people send an email to the moderators, they'll do it.
"A guideline to refrain" seems better. Basically, this should be only slightly more tolerated than "let me google for you" replies: maybe not actively harmful, but rude. But, anyway, let's not be overly pretentious: who even reads all these guidelines (or rules for that matter)? Also, it is quite apparent, that the audience of HN is on average much less technical and "nerdy" than it was, say, 10 years ago, so, I guess, expect these answers to continue for quite some time and just deal with it.
But this is a text-only forum, and text (to a degree, all digital content) has become compromised. Intent and message are no longer attributable to real-life experience or effort. For the moment I have accepted the additional overhead.
As with most people, I have a habit of estimating the validity of the expertise in comments, and the experiential biases behind them, but that is becoming untenable.
Perhaps there will soon be transformer features that produce prompts adequate to the task of reproducing the thought behind each thread, so their actual value, informational complexity, humor, and salience, may be compared?
Though many obviously human commenters are actually inferior to answers from "let me chatgpt that for you."
I have had healthy suspicions for a while now.
I can't locate them, but I'm sure they exist...
In many threads, those comments can be just as annoying and distracting as the ones being replied to.
I say this as someone who to my recollection has never had anyone reply with a rule correction to me -- but I've seen so many of them over the years and I feel like we would fill up the screen even more with a rule like this.
So no, I don't think forbidding anything helps. Otherwise, let things fall where they should.
If anything, it has been quite customary to supply references for important facts, letting readers explore further and interpret the facts themselves.
With AI in the mix, references become even more important, in view of hallucinations and fact poisoning.
Otherwise, it's a forum. Voting, flagging, ignoring are the usual tools.
Saying "I asked AI" usually falls into the former category, unless the discussion is specifically about analyzing AI-generated responses.
People already post plenty of non-substantive comments regardless of whether AI is involved, so the focus should be on whether the remark contributes any meaningful value to the discourse, not on the tools used to prepare it.
This should be restated: Should people stop admitting to AI usage out of shame, and start pretending to be actual experts, or to have done the research on their own, when they really haven't?
Be careful what you wish for.
I have a coworker who does this somewhat often and... I always just feel like saying, well, that's great, but what do you think? What is your opinion?
At the very least, the copy-paster should read what the LLM says, interpret it, fact-check it, and then write their own response.
I think making it a “rule” just encourages people to use AI and not acknowledge its use.
Someone below mentions using it for translation and I think that's OK.
Idea: Prevent LLM copy/pasting by preempting it. Google and other things display LLM summaries of what you search for after you enter your search query, and that's frequently annoying.
So imagine the same on an HN post. In a clearly delineated and collapsible box underneath or beside the post. It is also annoying, but it also removes the incentive to run the question through an LLM and post the output, because it was already done.
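A minimal sketch of how that client-side box could work, assuming a hypothetical /summary endpoint and a #story-<id> container; neither is a real HN feature, they're placeholders for the idea:

```typescript
// Sketch only: fetch a machine-generated summary for a story and render it
// in a collapsed, clearly labeled box under the post. The /summary endpoint
// and the #story-<id> element are invented, not real HN internals.
async function attachSummary(storyId: string): Promise<void> {
  const resp = await fetch(`/summary?id=${encodeURIComponent(storyId)}`);
  if (!resp.ok) return; // no summary available, show nothing

  const { text, model } = (await resp.json()) as { text: string; model: string };

  const box = document.createElement("details"); // collapsed by default
  box.className = "ai-summary";

  const label = document.createElement("summary");
  label.textContent = `Machine-generated summary (${model})`;

  const body = document.createElement("p");
  body.textContent = text;

  box.append(label, body);
  document.querySelector(`#story-${storyId}`)?.after(box);
}
```

Because the box is clearly delineated and collapsed by default, readers who want it can expand it, and there is no longer any point in pasting the same LLM output into a comment.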
Some people will know how to use it in good taste, others will try to abuse it in bad taste.
It might not be universally agreed which is which in every case.
Please don't pollute responses with made-up, machine-generated, time-wasting bits here…!!!
For instance, what's wrong with the following: "Here's an interesting point about the foo topic. Here's another interesting point about the bar topic; I learned of this through use of Gemini. Here's another interesting point about the baz topic."
Is this banned also? I'm only sharing it because I feel that I've vetted whatever I learned and find it worth sharing regardless of the source.
If the discussion itself is about AI then what it produces is obviously relevant. If it's about something else, nobody needs you to copy and paste for them.
2026 is a great year to watch out for typos. Typos are real humans.
I am a human and more than half of what I write here is rejected.
I say bring on the AI. We are full of gatekeeping assholes, but we definitely have never cared if you have a heart (literally and figuratively).
But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.
I think just downvoting by committed users is enough. What matters is the content and how valuable it seems to readers. There is no need for the guidelines to do any gatekeeping on this matter. That's my opinion.
If you didn’t think it, and you didn’t write it, it doesn’t belong here.
Is the content of the comment counter-productive? Downvote it.
I could see cases where large walls of text that are generally useless should be downvoted or even removed. AI or not. But, the first example
> faced with 74 pages of text outside my domain expertise, I asked Gemini for a summary. Assuming you've read the original, does this summary track well?
to be frank, is a service to all HN readers. Yes it is possible that a few of us would benefit from sitting down with a nice cup of coffee, putting on some ambient music and taking in 74 pages of... whatever this is. But, faced with far more interesting and useful content than I could possibly consume all day every day, having a summary to inform my time investment is of great value to me. Even If It Is Imperfect
There are far too many replies in this thread saying to drop the ban hammer for this to be taken seriously as Hacker News. What has happened to this audience?
(A) Ridicule the AI for giving a dumb answer.
(B) Point out how obvious something is.
Agreed. It's hard enough dealing with the endless stream of LLM marketing stories; please let's at least try to keep the comments a little free of this 'I asked...' marketing spam.
Doing this will lead to people using AI without mentioning it, making it even harder to tell human-origin content from machine-generated content.
Maybe that’s part of tracing your reasoning or crediting sources: “this got me curious about sand jar art, Gemini said Samuel Clemens was an important figure, I don’t know whether that’s historically true but it did lead me to his very cool body of work [0] which seems relevant here.”
Maybe it’s “I think [x]. The LLM said it in a particularly elegant way: [y]”
And of course meta-discussion seems fine: “ChatGPT with the new Foo module says [x], which is a clear improvement over before, when it said [y]”
There’s the laziness factor and also the credibility factor. LLM slop speaks in the voice of god, and it’s especially frustrating when people post its words without the clues we use to gauge credibility. To me those include the model, the prompt, any customizations, prior rounds in context, and any citations (real or hallucinated) the LLM includes. In that sense I wonder if it makes sense to normalize linking to the full session transcript if you’re going to cite an LLM.
I asked Perplexity, and Perplexity said: "Your metaphysical intuition is very much in line with live debates: once “small pebbles” are arranged into agents that talk, coordinate, and co-shape our world, there is a strong philosophical case that they should be brought inside our moral and political conversations rather than excluded by fiat."
Also, heaven forbid, AI can be right. I realize this is a shocker to many here. But AI has its uses, especially in easy cases.
Also, if you forbid people from telling you they consulted AI, they just won't say it.
(source: ChatGPT)
The question was something like: “how reliable is the science behind misinformation.” And it said something like: “quality level is very poor and far below what justifies current public discourse.”
I ask for a specific article backing this up, and it’s saying “there isn’t any one article, I just analyzed the existing literature and it stinks.”
This matters quite a bit. X - formerly Twitter - is being fined for refusing to make its data available for misinformation research.
I’m trying to get it to give me a non-AI source, but it’s saying it doesn’t exist.
If this is true, it's pretty important and something worth discussing. But it doesn't seem supportable outside the context of "my AI said."
IMO hiding such content is the job of an extension.
When I do "here's what chatgpt has to say" it's usually because I'm pretty confident of a thing, but I have no idea what the original source was, but I'm not going to invest much time in resurrecting the original trail back to where I first learned a thing. I'm not going to spend 60 minutes to properly source a HN comment, it's just not the level of discussion I'm willing to have though many of the community seem to require an academic level of investment.
Then we can just filter it at the browser level.
In fact why don't we have glyphs for it? Like special quote characters.
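As a rough illustration of the browser-level idea, a tiny userscript could collapse any comment that opens with an agreed-upon marker glyph; the glyph and the .comment selector below are made up for the sketch, not an existing convention:

```typescript
// Userscript-style sketch: hide comments that start with a marker glyph
// reserved for AI-generated text. The glyph and selector are assumptions.
const AI_MARKER = "⟦AI⟧";

function hideMarkedComments(root: ParentNode = document): void {
  for (const el of root.querySelectorAll<HTMLElement>(".comment")) {
    if (el.innerText.trimStart().startsWith(AI_MARKER)) {
      el.style.display = "none"; // or swap in a "show AI reply" toggle
    }
  }
}

hideMarkedComments();
```

The whole scheme obviously depends on posters actually using the glyph, which is the weak point every reply in this thread keeps circling back to.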
I'm not sure making a rule would be helpful though, as I think people would ignore it and just not label the source of their comment. I'd like to be wrong about that.
Someone once put it as, "sharing your LLM conversations with others is as interesting to them as narrating the details of your dreams", which I find eerily accurate.
We are here in this human space in the pursuit of learning, edification, debate, and (hopefully) truth.
There is a qualitative difference between the unreliability of pseudonymous humans here vs the unreliability of LLM output.
And it is the same qualitative difference that makes it interesting to have some random poster share their (potentially incorrect) factual understanding, and uninteresting if the same person said "look, I have no idea, but in a dream last night it seemed to me that..."
It’s the HN equivalent to “@grok is this true?”, but worse
Should we allow 'let me google that for you' responses?
Plus if you ban it people will just remove the "AI said" part, post it as is without reading and now you're engaging with an AI without even the courtesy of knowing. That seems even worse
Longer ...
I am here for the interesting conversations and polite debate.
In principle I have no issue with citing AI responses in much the same way we cite any other source, or with individuals prompting AIs to generate interesting responses on their behalf. When done well, I believe it can improve discourse.
Practically, though, we know that the volume of content AIs can generate tends to overwhelm human-based moderation and review systems. I like the signal-to-noise ratio as it is, so from my point of view I'd be in favour of a cautious approach, with a temporary guideline against its usage until we are sure we have the moderation tools to preserve that quality.
Why introduce an unnecessary and ineffective regulation?
I actually kind of find it surprising that this post and the top comments saying "yes" even exist because I think the answer should be so firmly "no", but I'll explain what I like to post elsewhere using AI (edit: and some reasons why I think LLM output is useful):
1. A unique human made prompt
2. AI output, designated as "AI says:". This saves you tokens and time copying and pasting over to get the output yourself, and it's really just to give you more info that you could argue for or against in the conversation (adds a lot of "value" to consider to the conversation).
3. Usually I do some manual skimming and trimming of the AI output to make sure it's saying something I'd like to share; just like I don't purely "vibe code" but usually kind of skim output to make sure it's not doing something "extremely bad". The "AI says:" disclaimer makes clear that I may have missed something, but usually there's useful information in the output that is probably better or less time consuming than doing lots of manual research. It's literally like citing Wikipedia or a web search and encouraging you to cross-check the info if it sounds questionable, but the info is good enough most of the time such that it seems valuable to share it.
Other points:
A. The AI-generated answers are just so good... it feels akin to people here not using AI to program (while I see a lot of posts posting otherwise that they have had a lot of positive experiences with using AI to program). It's really the same kind of idea. I think the key is in "unique prompts", that's the human element in the discussion. Essentially I am sharing "tweets" (microblogs) and then AI-generated essays about the topic (so maybe I have a different perspective on why I think this is totally acceptable, as you can always just scroll past AI output if it's labeled as such?). Maybe it makes more sense in context to me? Even for this post, you could have asked an AI "what are the pros and cons of allowing people to use LLM output to make comments" (a unique human prompt to add to the conversation) and then pasted AI output for people to consider the pros and cons of allowing such comments, and I'd anticipate doing this would generate a "pretty good essay to read".
B. This is kind of like in schools, AI is probably going to force them to adapt somehow because you could just add to a prompt to "respond in such a way as to be less detectable to a human" or something like that. At some point it's impossible to tell if someone is "cheating" in school or posting LLM output on to the comments here. But you don't need to despair because what's ultimately important on forum comments is that the information is useful, and if LLM output is useful then it will be upvoted. (In other concerning news related to this, I'm pretty sure they're working on how to generate forum posts and comments without humans being involved at all!)
So I guess for me the conversation is more how to handle LLM output and maybe for people to learn how to comment or post with AI assistance (much like people are learning to code with AI assistance), rather than to totally ban it (which to me seems very counter-productive).
edit: (100% human post btw!)
If someone thinks an "I asked $AI, and it said" comment is bad, then they can downvote it.
As an aside, at times it may be insightful or curious to see what an AI actually says...
Of course I prefer to read the thoughts of an actual human on here, but I don't think it makes sense to update the guidelines. Eventually the guidelines would get so long and tedious that no one would pay attention to them and they'd stop working altogether.
(did I include the non-word forbiddance to emphasize the point that a human––not a robot––wrote this comment? Yes, yes I did.)
If it’s a low effort copy pasta post I think downvotes are sufficient unless it starts to obliterate the signal vs noise ratio on the site.
Yes, if you wanted to ask an LLM, you'd do so, but someone else asks a specific question of the LLM and generates an answer that's specific to their question. And that might add value to the discussion.
I feel like the HN guidelines could take inspiration from how Oxide uses LLMs. (https://rfd.shared.oxide.computer/rfd/0576). Specifically the part where using LLMs to write comments violates the implicit social contract that the writer should put more care and effort and time into it than the reader. The reader reads it because they assume this is something a person has put more time into than they need to. LLMs break that social contract.
Of course, if it’s banned maybe people just stop admitting it.
I am blown away by LLMs - I'm now using ChatGPT to help me write Python scripts in seconds or minutes that used to take me hours or weeks.
Yet, when I ask a question, or wish to discuss something on here, I do it because I want input from another meatbag in the hacker news collective.
I don’t want some corporate BS.
Thank you for your attention on this matter.
So I think maybe the guidelines should say something like:
HN readers appreciate research in comments that brings information relevant to the post. The best way to make such a comment is to find the information, summarize it in your own words that explain why it's relevant to the post and then link to the source if necessary. Adding "$AI said" or "Google said" generally makes your post worse.
---------
Also I asked ChatGPT and it said:
Short Answer
HN shouldn’t outright ban those comments, but it should culturally discourage them, the same way it discourages low-effort regurgitation, sensationalism, or unearned certainty. HN works when people bring their own insight, not when they paste the output of a stochastic parrot.
A rule probably isn’t needed. A norm is.
Low effort LLM crap is bad.
Flame bait uncurious mob pile-ons (this thread) are also bad.
Use the downvote button.
with features:
- ability to hide AI labeled replies (by default)
- assign lower weight when appropriate
- if a user is suspected to be AI-generated, retroactively label all their replies as "suspected AI"
- in addition to downvote/upvote, a "I think this is AI" counter
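A toy sketch of how those signals might feed into ranking; every field name and the down-weighting threshold here are invented for illustration and have nothing to do with how HN actually scores comments:

```typescript
// Illustration only: per-comment signals and a score that down-weights
// replies the community has flagged as likely AI.
interface CommentSignals {
  upvotes: number;
  downvotes: number;
  aiFlags: number;            // "I think this is AI" clicks
  authorSuspectedAI: boolean; // retroactive label applied to the whole account
}

function effectiveScore(c: CommentSignals): number {
  const base = c.upvotes - c.downvotes;
  // Down-weight rather than hide outright once enough readers flag it.
  const suspectedAI = c.authorSuspectedAI || c.aiFlags >= 5;
  return suspectedAI ? base * 0.5 : base;
}
```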
It’s the same as “this” or “wut” but much longer.
If you’re posting that and ANALYZING the output, that’s different. That could be useful. You added something there.
Edit: I'm happy to add two related categories to that too - telling someone to "ask ChatGPT" or "Google it" is a similar level offense.
I've been seeing more and more of these on the front page lately.
Since that isn't likely to happen, perhaps the community can develop a browser extension that calls attention to or suppresses such accounts.
large LLM-generated texts just get in the way of reading real text from real humans
In terms of reasons for platform-level censorship, "I have to scroll sometimes" seems like a bad one.

> 1. Existing guidelines already handle low-value content. If an AI reply is shallow or off-topic, it gets downvoted or flagged.
>
> 2. Transparency is good. Explicitly citing an AI is better than users passing off its output as their own, which a ban might encourage.
>
> 3. The community can self-regulate. We don't need a new rule for every type of low-effort content.
>
> The issue is low effort, not the tool used. Let downvotes handle it.
For obvious(?) reasons I won't point to some recent comments that I suspect, but they were kind and gentle in the way that Opus 4.5 can be at times; encouraging humans to be good with each other.
I think the rules should be similar to the bot rules I saw on Wikipedia. It ought to be OK to USE an AI in the process of making a comment, but the comment needs to be 'owned' by the human/the account posting it.
E.g. if it's a helpful comment, it should be upvoted. If it's not helpful, downvoted; and with a little luck people will be encouraged/discouraged from using AI in inappropriate ways.
"I asked gemini, and gemini said..." is probably the wrong format, if it's otherwise (un)useful, just vote it accordingly?
AI-LLM replies break all of these things. AI-LLM replies must be declared as such, for certain IMHO. It seems desirable to have off-page links for (inevitable) lengthy reply content.
This is an existential change for online communications. Many smart people here have predicted it and acted on it already. It is certainly trending hard for the foreseeable future.
What AI regurgitates about a topic is often more interesting and fact/data-based than the emotionally-driven human pessimists spewing constant cynicism on HN, so in fact I much prefer having more rational AI responses added in as context within a conversation.
HN is not actually a democracy. The rules are not voted on. They are set by the people who own and run HN.
Please tell me what you think those people think of this question.
"Banning" the comment syntax would merely ban the form of notification. People are going to look stuff up with an LLM. It's 2025; that's what we do instead of search these days. Just like we used to comment "Well Google says..." or "According to Alta Vista..."
Proscribing quoting an LLM is a losing proposition. Commenters will just omit disclosure.
I'd lean toward officially ignoring it, or alternatively ask that disclosure take on less conversational form. For example, use quote syntax and cite the LLM. e.g.:
> Blah blah slop slop slop
-- ChatGippity
Obligatory xkcd https://xkcd.com/810/
This is new territory, you don't ban it, you adapt with it.
I for one would love to have summary executions for anyone who says that Hello-Fellow-Kids cringe pushed on us by middle-aged squares: "vibe"
1. Paid marketing (tech stacks, political hackery, Rust evangelism)
2. Some sociopath talking his own book
3. Someone who spouts off about things he doesn’t know about (see: this post’s author)
The internet of real people died decades ago and we can only wander in the polished megalithic ruins of that enlightened age.
I find myself downvoting (flagging) them when I see them as submissions, and I can't think of any examples where they were good submission content; but for comments? There's enough discussion where the AI is the subject itself and therefore it's genuinely relevant what the AI says.
Then there's stuff like this, which I'd not seen myself before seeing your question, but I'd say asking people here if an AI-generated TLDR of 74 (75?) page PDF is correct, is a perfectly valid and sensible use: https://news.ycombinator.com/item?id=46164360
But most of the time it’s like they were bothered that I asked, and they just copy-paste what an AI said.
Pretty easy. Just add their name to my “GFY” list and move on in my life.