Don't post generated/AI-edited comments. HN is for conversation between humans
by Freebytes
6 subcomments
- Using AI to write content is seen so harshly because it violates the previously held social contract that it takes more effort to write messages than to read messages. If a person goes through the trouble of thinking out and writing an argument or message, then reading is a sufficient donation of time.
However, with the recent chat based AI models, this agreement has been turned around. It is now easier to get a written message than to read it. Reading it now takes more effort. If a person is not going to take the time to express messages based on their own thoughts, then they do not have sufficient respect for the reader, and their comments can be dismissed for that reason.
- I am 100% behind this. I've been browsing Hacker News since I started in tech; it is the only forum I regularly browse and partake in, simply because the quality of submissions and conversations is so high. There have been more AI-related articles this past year, and it only seems to be ramping up. I personally haven't found the AI portion of the comments to be as big of a deal, but dang and tom might be doing more than I realize on that front.
Though I do wish we'd see fewer AI-related posts on the front page. They simply aren't sparking curiosity; it's the same thing wrapped in a different format: a different person commenting on their struggles and wins with AI, the tenth piece of software "rewritten" by an AI.
At this point there should nearly be a "tax" on the category; as of this moment I count 8-10 posts on the front page related to AI / LLMs. It is a hot field, but I come to Hacker News to partake in discussions about things that are interesting, and many of those posts just don't cut it, in my opinion.
by caditinpiscinam
12 subcomments
- We've all heard the phrase "the sum of all human knowledge".
I've been feeling more and more that generative AI represents the average of all human knowledge. Which has its place. But a future in which all thought and creativity is averaged away is a bleak one. It's the heat death of thought.
- I feel a little bit of irony in this post of a company/forum that is asking its users to not use AI while simultaneously trying to fund countless companies that are responsible for ruining the internet as we speak.
- The rule has been around for years, but only in case law, i.e. moderation comments (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). What's new is that we promoted it to the guidelines.
Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.
---
Edit: here are the bits I cut:
Videos of pratfalls or disasters, or cute animal pictures.
It's implicit in submitting something that you think it's important.
I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.
---
Edit 2: ok you guys, I hear you - I've cut a couple of the cuts and will put the text back when I get home later.
by ontouchstart
0 subcomments
- I finished reading the thin book "Systemantics" by John Gall yesterday (thanks @dang).
I realized that the problem of AI generated/edited content flooding everywhere around us is a symptom of something wrong with the System.
It might have something to do with sensory deprivation. Here is a quote from the book that caught my attention because of the word "hallucination":
> As we all know, sensory deprivation tends to produce hallucinations.
> FUNCTIONARY’S FAULT: A complex set of malfunctions induced in a Systems-person by the System itself, and primarily attributable to sensory deprivation.
(As I typed the text above on my iPhone, I was fighting autocompletion because AI was trying to “correct” the voice of John Gall and mine to conform to the patterns in its training data. Every new character is a fight against Gradient Descent.)
All you need is attention, but the cost of attention is getting higher and higher when there is little worth our attention.
It takes a lot of effort to be human.
- There should be a "flag as AI" link in addition to "flag", and then a setting for people to show comments flagged as AI. Once the flagged-as-AI count reaches a certain threshold, the comment disappears unless you enable "Show AI".
Maybe once enough posts have been flagged like that, the corpus could be used to train an AI to automatically detect content generated by AI.
That would be cool.
Maybe the HN site wouldn't add this feature but if someone wrote a client then maybe it could be added there.
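A minimal sketch of how such a flag-and-threshold mechanism could work, whether server-side or in a client. The class names, functions, and threshold value here are all hypothetical, not any actual HN or client API:

```python
from dataclasses import dataclass

AI_FLAG_THRESHOLD = 5  # hypothetical cutoff before a comment is hidden


@dataclass
class Comment:
    text: str
    ai_flags: int = 0  # number of "flag as AI" votes received


def flag_as_ai(comment: Comment) -> None:
    """Record one 'flag as AI' vote on a comment."""
    comment.ai_flags += 1


def is_visible(comment: Comment, show_ai: bool) -> bool:
    """A comment at or past the threshold is hidden unless the
    viewer has enabled the 'Show AI' setting."""
    if comment.ai_flags >= AI_FLAG_THRESHOLD:
        return show_ai
    return True
```

Comments hidden this way would also double as the labeled corpus the commenter imagines using to train a detector.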
by uni_baconcat
13 subcomments
- For quite a while, I liked using LLMs to refine my writing and fix my grammar issues, but my colleagues and professors reminded me that it was way too obvious. They said they could tolerate some mistakes in my words, but had no tolerance for AI-generated content.
- What a welcome post. The whole reason I come here is to get thoughtful input from smart people, and not what I could get myself from an LLM. While we are at it; Think your own thoughts as well :) I know how easy it is to "let it come up with a first draft" and not spend the real effort of thinking for yourself on questions, but you'll find it's a road to perdition if you let yourself slip into the habit. Thanks to all the humans still here!!
- I'm absolutely 100% for this policy.
My only caution is that good writers and LLMs look very similar, because LLMs were trained on a corpus of good writers. Good writers use semicolons and em-dashes. Sometimes we use bulleted lists or Oxford commas.
So we should make sure to follow that other HN rule, and assume the person on the other end is a good faith actor, and be cautious about accusing someone of using AI.
(I've been accused multiple times of being an AI after writing long, well-written comments 100% by hand.)
- How about comments that include AI output if labeled?
Earlier today I remembered that there was a Supreme Court case I'd heard about 35 years ago that was relevant to an ongoing HN discussion, but I could not remember the name of the case, nor could I find it by Googling (Google kept finding later cases involving similar issues that were not relevant to what I was looking for).
I asked Perplexity, and given my recollection and when I heard about the case, it suggested a candidate and gave a summary. The summary matched my recollection, and a quick look at the decision itself verified it had found the right case and done a good job summarizing it--probably better than I would have done.
I posted a cite to the case and a link to decision. I normally would have also linked to the Wikipedia article on the case since those usually have a good summary but there was no Wikipedia article for this one.
I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked it and it was a good summary.
Would that be OK or would that count as an AI written comment?
I have also considered, but not yet actually tried, running some of my comments through an AI for suggested improvements. I've noticed I have a tendency to do three things that I probably should do less of:
1. Run-on sentences. (Maybe that's why, of all the people in the 11th-100th spots on the karma list, I have the highest ratio of words/karma, with 42+ words per karma point [1].)
2. Use too many commas.
3. Write "server" when I mean "serve". I think I add "r" to some other words ending in "e" too.
I was thinking those would be something an AI might be good at catching and suggesting minimal fixes for.
[1] https://news.ycombinator.com/item?id=46867167
- As a type nerd, I was very happy with Grammarly swapping my dashes to em-dashes. But now that everyone associates em-dashes with AI, I can no longer enjoy that luxury.
by schopra909
7 subcomments
- Honest question, why were folks posting AI generated comments in the first place? There's such a high inertia to comment. I only comment when I have something to contribute OR find something incredibly interesting.
So I'm just baffled as to why anyone was using AI to generate comments. Like, what was the incentive driving the behavior?
- Good. This helps establish it in the HN culture. That’s the purpose of guidelines.
99% of rule enforcement, both IRL and online, comes down to individuals accepting the culture.
Rules aren’t really for adversaries, they are for ordinary situations. Adversaries are dealt with differently.
- It's quite funny how native speakers can recognise the AI voice writing or speaking their tongue.
As a Polish man, I am repulsed when I hear an AI-generated Polish voice in a commercial, but I can't see problems in AI-generated English speech.
- Don’t be afraid to make grammar mistakes or misspell stuff. Others will understand. You’re a human, after all. It’s okay to make mistakes and to feel uncomfortable about it.
- Me not native speeker. AI help me too get my point front much more cleanly. It hard not look like dummy.
I'm of course exaggerating, but it is so easy to just run the text through an AI to make it sound "better" without changing what I'm trying to express.
---
I’m not a native speaker, so AI helps me get my point across more clearly. It’s hard not to come across like a dummy otherwise.
Of course I’m exaggerating, but it’s really easy to run the text through AI to make it sound better without changing what I’m trying to say.
by Supermancho
7 subcomments
- I use AI for the elements I feel are weak or unclear in the transcription. Sometimes I copy-paste a paragraph into ChatGPT or whatever, to ensure my (aging) thoughts are being communicated in a crystal clear manner. I cannot always point out why I think they are unclear or jumbled.
I don't feel this is an imposition on others. I think it's the opposite. It enhances signal by reducing nitpicking, spelling/grammar errors that might muddle intent, and reminds me of proper sentence structure.
Many of us are guilty of run-ons, fragments, overly large blocks of text[1] because it's closer to how people often converse, verbally. Posts on the internet are not casual conversation between humans. They are exchanges of ideas.
[1] This is a classic example where I had to go back and edit it to ensure it was readable. As you do self-review with any commit ^^
by primitivesuave
2 subcomments
- The most telling sign of a human commenter is brevity.
Consequently, I hardly ever spend the time to write out long and detailed HN comments like I used to in the pre-LLM era. People nowadays have a much harder time believing that an Internet stranger is meticulously crafting a detailed and grammatically-airtight message to another Internet stranger without AI assistance.
- Now that it's in the rules, I hope we also see less of "your comment was obviously AI generated so I won't respond" (ironically, in a response comment).
If you suspect it to be a bot, flag it and move on! If it is indeed a bot and you comment that it's a bot, it doesn't care! If it is not a bot and you call it a bot, you may have offended someone. If it's a human using AI, I don't think a comment will make them change their ways. In any case though, I think it's a useless comment.
by yavor-atanasov
1 subcomment
- This thread made me think of education (as in schools). To paraphrase:
“Don’t post generated/AI-edited assignments. School is for conversation between humans”
AI can be a great tool for learning, but also can pollute or completely hijack the medium for human interaction and learning.
Having HN flooded with AI generated content will be sad as I like reading it, but losing that same fight at schools will be detrimental.
- If you feel the need to fix/edit your own comments with AI, keep in mind that this is not necessary at all. If someone can't figure out what you're saying, and doesn't care to try, they can run their LLM over it and have it summarize with emojis, bullet points, and slightly changed content. You don't need to do that for all of us.
- No way to verify. Relying on the humans here to self-censor has never worked in the history of man. But the idea in itself is good: HN is for human-to-human conversation.
- I also feel the frustration of the llm reverse-compression - when a whole article is generated from a single sentence. But when I post something edited by AI it is usually a result of a long back and forth of editing and revising. I guess I could post the whole conversation thread - but it would be very long.
Personally I would just like to read the best comments.
- First of all, I suggest that moderators add this to the comments section of the linked guidelines. It should clearly state that pasting AI-generated replies is discouraged and does not fit the community spirit.
Second, I have to confess that I committed this sin a couple of times, but I came to realize that it is good neither for me nor for the HN community. Although I used AI just for rephrasing, I decided never to do it again; I'd rather write my own words, mistakes and all, than post generated words based on my thoughts.
It happened to me once and it struck me like a nuke; I felt truly embarrassed. A couple of months ago I wrote a comment (https://news.ycombinator.com/item?id=42264786), then asked ChatGPT to rephrase it, and then mistakenly pasted both versions, the original above and the generated one below, and hit submit. Shortly after, a user read my comment and posted an embarrassing reply, and honestly, I deserved it. From that moment I realized how quickly things can get messed up when you rely heavily on AI.
- Agreed. AI-generated articles and comments provide little to no value beyond the original prompt. Please just post the original prompt instead.
by bikamonki
3 subcomments
- My words:
This feels like don't buy at Walmart, support the local small shop. We passed the no return sign miles ago.
Gemini's:
This is like advocating for artisanal blacksmithing in the age of industrial steel. It sounds great in theory, but we passed the point of no return miles back.
Yeah, we can tell the difference :)
by bondarchuk
9 subcomments
- All the weak excuses posted here are just making me lean more towards a hardline policy. No I don't want to read a human-generated summary of your llm brainstorming session. No I don't want to read human-written text with wording changes suggested by an llm. No I don't want to read an excerpt from llm output even if you correctly attribute it.
I acknowledge this is partly just my personal bias, in some cases really not fair, and unenforceable anyway, but someone relying on llms just makes me feel like they have... bad taste in information curation, or something, and I'd rather just not interact with them at all.
- I'm tickled pink to read this! I very much support this move. HN is one of the few internet forums I use, and it'd be awful to see it riddled with bot spew.
This rule will at least partly stem the danger of HN getting turned into what dang calls a "scorched earth" situation.
by Someone1234
18 subcomments
- "AI-edited comments" is a very interesting one. Where is the line between a spelling/grammar/tone checker like Grammarly, which at minimum uses n-grams behind the scenes, and something that is "AI-edited"? What I am asking is: is "AI" in this context fully featured LLMs, or anything that improves communication via an automated system? I think many people used these "advanced" spellcheckers for years before ChatGPT et al. came on the scene.
I think "generated comments" is a pretty hard line in the sand, but "AI-edited" is anything but clear-cut.
PS - I think the idea behind these policies is positive and needed. I'm simply clarifying where it begins and ends.
by GMoromisato
21 subcomments
- I'm here to read what actual humans think. If I wanted to read what an LLM thinks, I could just ask it.
But here's where it gets tricky: Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?
Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)? Or do I value authentic human output because I expect it to be of higher quality?
I confess that it is a little of both. But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.
by bennydog224
0 subcomments
- I personally enjoy the errors and oddities in syntax and dialect that tell me something is definitively NOT written by AI and help me understand the author better in such an anonymous space.
The second is gonna be a lot harder to enforce, as we soon (and probably already) don't know who we're talking to on the internet - a real person or someone's agent? Will calling spaces "human only" later be seen as discriminatory by agents? How will we actually enforce "human only" spaces? Will websites like HN start to provide an "agent only" discussion forum or filter in addition to the "human only" sections?
by theshrike79
2 subcomments
- I've written tens of thousands of lines of code, autogenerated documentation with LLMs and use AI Agents daily.
But when I argue on the internet, it's always 100% me.
And if I get a whiff of LLM-speak from whoever I'm wrestling in the mud with at the moment, they'll instantly get an entry in my plonk file. I can talk with ChatGPT on my own, thank you very much; I don't need a human in between.
"But my <language> is bad... that's why I use LLMs"
So was mine when I started arguing with strangers on the internet. It's better now. Now I can argue in 3 different languages, almost 4 =)
by 0xbadcafebee
2 subcomments
- I wish more people would filter their comments through AI. It has so many benefits. If you're being emotional, it can detect that and rewrite your comment to be less confrontational and more constructive. If you're positing a position out of ignorance or as an armchair expert, it can verify your claims before posting. Most of the mod's problems would be solved if every comment were filtered through the HN guidelines before posting.
AI is a tool. You can use it constructively, like Grammarly, or spellcheck. You don't need to be afraid of it.
by dalemhurley
1 subcomment
- While I understand the sentiment, it ignores that many people have English as a second language, and that many people are dyslexic or have dysgraphia. AI is a great assistant. A better approach would be to encourage people to develop their thinking rather than lean on AI tools.
by mitchitized
0 subcomments
- You're absolutely right!
(Sorry, couldn't resist.) I could be the lone dissenter here, but to me well-written comments are a lot more fun to read than near-gibberish.
I wish more people tried harder to be better communicators, but it is what it is. If AI can decipher these comments and produce a much more coherent statement, then I'm for it.
- The only question is whether the entity is interesting and/or correct. Those properties are in the eye of the beholder. Whether they're human or not is beside the point.
After all, no one knows I'm a dog.
- I too care about this, but I say it mindful of the reality we're in. It reminds me of those "no shirt, no shoes, no service" signs, except it's much worse: only sentient beings will actually care about the sign, while non-sentients simply trample over it as token-predicted laughter erupts from their token-predicted sense-of-humor artifact.
Elon said it well: there must be some disincentive to do this.
- Whether it’s code, general text, or university assignments, the core issue is taking responsibility for one's own work. While I share the concerns raised in this thread, I believe the focus on 'LLM usage' is a bit of a red herring. The fundamental principle of ownership hasn't changed with the advent of LLMs; the tool itself isn't the issue, but rather the abdication of responsibility by the author.
For instance, if a non-native speaker translates their own writing using machine translation or an AI, is that problematic—provided they personally review and vet the content before posting? I don't think the people calling out AI use on this board are taking issue with that. Ultimately, it’s not about the method; it’s about the author's attitude.
The reason LLMs are so disruptive now is that while "shitposts" used to be obvious, we're now seeing "plausible" low-effort content generated without any human oversight. Irresponsible people have always been around, but LLMs have given them the tools to scale that irresponsibility to an unprecedented level.
by Normal_gaussian
0 subcomments
- This rule is very important. Like many of the other rules, it is open to interpretation, but it is a line in the sand that defines allowable behaviour and disallowable behaviour.
This rule will have an effect on the behaviour of the 'good players', and make the 'bad players' a lot easier to spot. Moderation needs this. I see this as stopping a race-to-the-bottom on value extraction from HN as a platform.
- Absolutely love this. If people are relying on AI for a 30-45 word comment, I don’t want to waste my time reading it. And everyone using AI for discussions will end up coming to the same conclusions. Use your own ideas!
- I believe the issue of proving who is and who isn't really human on the Internet will be a really important issue in the coming years, especially without sacrificing people's right to privacy and anonymity in the process.
by Nevermark
1 subcomment
- This is a wonderful rule.
It also points out the need for AI writing tools that very strictly just:
1. Point out misspellings and typos.
2. Point out grammar mistakes, if they confuse the point.
3. Point out weaknesses of argument, without injecting their own reasoning.
I.e. help "prompt" humans to improve their writing, without doing the improvement for them.
In fact, I would like a reliable version of that approach for many types of tasks where my creativity or thought processes are the point, and quality-control feedback (but not assistance), is helpful.
This is a mode where models could push humans to work harder, think deeper, without enabling us to slack off.
- Funny how most people flipped from being grammar nazis to treating mistakes as proof of human authorship.
- Does that extend to generated/AI-edited articles? I don't see why the same rationale wouldn't apply.
- A good addition, but to be fair the HN guidelines have become so quaint, particularly as they are now rarely enforced or even acknowledged. E.g. "Eschew flamebait. Avoid generic tangents. Omit internet tropes." And "Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. If they'd cover it on TV news, it's probably off-topic." These are violated every day without consequence.
by travisgriggs
1 subcomment
- TIL the definition of "fulminate" (fulminated, fulminating):
to explode with a loud noise; detonate.
to issue denunciations or the like (usually followed by "against").
(Because “don’t fulminate” is the rule that follows the referenced one :) )
by CactusBlue
0 subcomments
- Slightly tangential, but this paragraph is the only one on the rules page with an "id" attribute set, so that you can link to this specific rule.
- Not sure I agree with the AI edited comments. Using AI to improve the readability and clarity is fine. Sometimes a well structured comment is much better than a braindump that reads like ramblings. And AI is quite good at it (and probably will get better). To make the point, here is how this comment would have looked if edited:
"I don't fully agree with banning AI-edited comments. Using AI to improve readability and clarity is a reasonable thing to do. A well-structured comment is often much better than a braindump that reads like rambling. AI is quite good at this, and it will probably get better. To illustrate the point, here is how this comment would have looked if edited"
- Great to clarify the guidelines. Many HN discussions have been dissolving into debates about whether posts are AI or not.
But the argument of "If I wanted to read what an LLM thinks, I could just ask it" assumes that prompts are basically equivalent, which is not the case.
There's a risk of reducing everything to Human -> authentic and AI -> fake. Some people's authentic writing sounds closer to LLMs, and detectors are unreliable.
The problem is not so much AI generated content that has an interesting point of view generated from unique prompts, but terrible content produced for metrics to harvest attention, which predates AI.
Anyways, happy posting!
by unsignedint
1 subcomment
- I guess this kind of rule feels less pragmatic and more philosophical. For one thing, it’s nearly impossible to enforce in practice, and drawing a clear line between simple grammatical correction and AI-assisted editing is a pretty hard problem.
- That’s fine. I’m not really bothered by this either way in hn context
Only really irritated by the ultra low effort “here is a raw copy paste of what my LLM said on this topic” comments. idk how people think that’s helpful or desired
by chrisweekly
3 subcomments
- I like this guideline, at least in principle.
But I have some concerns about suppression of comments from non-native English writers. More selfishly, my personal writing style has significant overlap with so-called "tells" for AI generated prose: things like "it's not X, it's Y", use of em-dashes, a fairly deep vocabulary, and a tendency toward verbosity (which I'm striving to curb). It'd be ironic if I start getting flagged as a bot, given I don't even use a spell-checker. Time will tell.
by Imustaskforhelp
1 subcomment
- Yes! This is a really great change, or at the very least it's good that there are now some proper Hacker News guidelines about it.
In my observation there have recently been quite a lot of new AI-generated comments in general, not even trying to hide it, with full em-dashes and everything.
I do feel like people are going to get sneaky in the future, but there are going to be multiple discussions about that within this thread.
But I find it pretty cool that HN takes a stance on it. The HN rules essentially saying "bots need not comment" is pretty great, imo.
It's a bit of a cat-and-mouse problem, but so is buying upvotes in places like Reddit, and HN, with its long track record, might have had one or two suspicious incidents, but long term it feels robust. I hope the same robustness applies in this case too.
Wishing moderation luck that bad actors don't try to take it as a challenge and leave our human community to ourselves :]
Another point I'd like to make: if this succeeds, we can also stop posting the "did you write your comment with an LLM?" remarks, which I too make from time to time when I see someone clearly using AI. False positives happen as well (they have happened to me, and I see them happen to others), and they de-rail the discussion. So HN being a place for humans, by humans, can fix that issue too.
Knowing dang and tomhow, I feel somewhat optimistic!
by QuantumGood
0 subcomments
- And be kinder to obviously human posts and help them.
by AceJohnny2
1 subcomment
- Translation is a form of AI editing.
Language translation is the origin of (the current wave of) AI and its killer app. English is not the main language of the world, and translation opens us up to a huge pool of interesting thinkers.
I'm a native speaker of another language, but out of practice except for a weekly family call. I recently had to write a somewhat technical email to my family, and found it easier to write it in (my more practiced) English and have AI translate it than to write it in the target language myself. Of course, in my case I was able to verify that the output conveyed the meaning I intended, because I am fluent in the target language.
Along with the rise of GenAI, I've also noticed a rise in translated messages. It's usually hard to tell the difference, except by looking at the commenter's history (across other subreddits; impossible on HN).
I understand the original frustration with GenAI comments and the reactionary response. I'm sorry that we're excluding what could be a large pool of interesting people because we can't tell the difference.
- Some basic things to do while thinking about longer-term bot detection:
1. Prevent any account from submitting an actual link until it reaches X months old and Y karma (not just one or the other.)
2. Don't auto-link any URLs from said accounts until both thresholds in #1 are met, so they can't post their sites as clickable links in comments to get around it. Make it un-clickable or even [link removed] but keep the rest of the comment.
3. If an account is aged over X months/years old with 0 activity and starts posting > 2 times in < 24 hrs, flag for manual review. Not saying they're bots, but an MO is to use old/inactive accounts and suddenly start posting from them. I've seen plenty here registered in 2019-2021 and just start posting. Don't ban them right away, but flag for review so they don't post 20 times and then someone finally figures it out and emails hn@.
4. When submitting a comment, check last comment timestamp and compare. Many bots make the mistake of commenting multiple detailed times within sixty seconds or less. If somebody is submitting a comment with 30 words and just submitted a comment 30 seconds ago in an entirely different thread with 300 words, they might be Superman. Obviously a bot.
5. Add a dedicated "[flag bot]" button to users that meet certain requirements so they don't need to email hn@ manually every time. Or enable it to people that have shown they can point out bots to you via email already. Emailing dozens of times a day is going to get very annoying for those that care about the website and want to make sure it doesn't get overrun by bots.
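The first and fourth of these heuristics can be sketched in a few lines. The `Account` type, field names, and threshold values below are illustrative assumptions for this comment's proposal, not HN's actual schema or limits:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative thresholds only; stand-ins for the X/Y values above
MIN_ACCOUNT_AGE_DAYS = 90
MIN_KARMA = 50
MIN_SECONDS_BETWEEN_COMMENTS = 60.0


@dataclass
class Account:
    age_days: int
    karma: int
    last_comment_ts: Optional[float] = None  # unix seconds of last comment


def may_submit_link(acct: Account) -> bool:
    # Heuristic 1: require BOTH the age and the karma threshold,
    # not just one or the other.
    return acct.age_days >= MIN_ACCOUNT_AGE_DAYS and acct.karma >= MIN_KARMA


def looks_like_bot_burst(acct: Account, now_ts: float) -> bool:
    # Heuristic 4: a second detailed comment within a minute of the
    # previous one (possibly in a different thread) is suspicious.
    if acct.last_comment_ts is None:
        return False
    return (now_ts - acct.last_comment_ts) < MIN_SECONDS_BETWEEN_COMMENTS
```

As the comment suggests, tripping these checks would flag an account for manual review rather than banning it outright.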
by maplethorpe
2 subcomments
- How can HN be so pro-AI for the rest of the world, but anti-AI on HN?
Do we not think that other people want to see words, pictures, software, and videos created by humans too?
by randusername
1 subcomment
- "If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them."
- George Orwell
I don't think it is a moral failing to use AI to generate writing, or to brainstorm ideas and crystallize them, but c'mon, isn't it weird to insist that you need it to write _comments_ on the internet? What happens when the AI decides you're wrongthinking?
- Since we now face a threat of large-scale de-anonymization, a reasonable countermeasure might be using AI to make one's writing style less personally identifying, in order to try and retain some pseudonymity.
https://simonlermen.substack.com/p/large-scale-online-deanonymization
https://news.ycombinator.com/item?id=47139716
by sschueller
0 subcomments
- I have the feeling my grammatical errors from being ESL are "tolerated" a lot more than a few years ago. By that I mean they don't get called out as much as they used to.
by hollowturtle
0 subcomments
- > Please don't post insinuations about astroturfing, shilling
Reading the site over the past 2 years has left me with the feeling that HN has been injected with AI marketing campaigns that are subtle enough to be hard to catch. It's exhausting, and calling out astroturfers imo is not that bad.
- What if English is my second language? Undoubtedly, being well spoken is associated with higher class; your arguments will come off as stronger to the reader.
- My question is, and this is genuinely a question: Do you think YC-backed companies would have respected this guideline if it was posted on some other website they wanted to operate in?
- I've got some reflecting to do, because the first thing I did after reading the headline, before even clicking through to the actual post, was look for AI comments.
I miss pre 2010 internet. As soon as the advice animal memes started appearing on Facebook it was a quick decline.
by RealityVoid
5 subcomments
- I think using AI for a bit more potent spellchecking or style hints is... fine, honestly. I don't usually do it; you can tell from all the silly spelling mistakes I make. But a bit more polishing for your posts is a good thing, not a bad one, as long as it doesn't hide your voice.
by daft_pink
2 subcomments
- I’m not sure I agree with this, because sometimes it is difficult to figure out the correct way to phrase an idea that is in your head, and I like to use AI to help organize my thoughts even though the thinking is my own. That being said, most of my comments are not AI generated.
- People aren't good at detecting AI-generated/edited comments, so I'm unsure how effective this policy will be. Though I guess there are still some obvious signs of AI speak, like em dashes and sycophantic ("it's not X, it's Y!") phrasing.
Bit of a shameless plug but I wrote a HN AI comment detector game[0] with AI and most of my friends and fellow HN users who tried it out couldn't detect them.
[0]: https://psychosis.hn/
[1]: https://sajarin.com/blog/psychosis/
by randomNumber7
1 subcomment
- The problem is that there is no way to distinguish AI-generated content from something a human has written.
- How about translation tools? As a non-native speaker, especially for longer text, it's far easier to express your thoughts and not struggle for the right words. Should I maybe highlight if I used e.g. Google Translate?
- What I think would actually be useful is a version of what was implemented on /r/ClaudeAI, which is an official bot that summarizes the discussion (and updates after x number of comments have been added). I think this level of synthesis has a compounding effect on discussion quality, pruning redundant arguments/topics.
Example: https://www.reddit.com/r/ClaudeAI/s/BJKLxzJA16
- Without a technical means to enforce this, the only result of this policy will be a culture of paranoia and a lot of false positives.
- What is meant by "AI-edited"?
AI can do a great job of grammar, spelling, and phrasing checking/fixing without changing any content, i.e. just acting as a fancy version of extended spell checking.
While I currently don't use it like that, there shouldn't be any reason to ban it.
And tbh, given some recent comments, I have been really wondering if I should use it, because either there are quite a bunch of people with lacking reading comprehension or quite a bunch of people with prejudice against people struggling with English spelling and grammar.
Either way, using AI as an extended spell checker would help with getting the message through to both groups, as:
- it helps with spelling and grammar in ways where traditional spell checkers fail hard
- it tends to recommend very easy-to-read sentence structure and information density
by adamgordonbell
0 subcomments
- This list of Do and Don'ts now reads like a bad Claude.md file to me.
Don't insinuate that someone else must have broken that. It was you.
Do run the linter
Don't commit throw-away code
Do write a test case
Don't write a comment describing every single function
Seriously, run the linter. And fix the issues.
It is your fault.
by chrystianpl
15 subcomments
- As English is my second language and I have dyslexia, I was just wondering: what do you mean by "AI-edited comments"?
Can I not ask an LLM to check whether my grammar is correct and fix it? When I was on another account, I was downvoted because of my styling/grammar, not because of the content.
- This is a welcome change and I will update Ethos [1] in the future with an AI sentiment score. I created a separate project called LLaMaudit [2] that attempts to detect if an LLM was used to generate text, but it needs to be improved.
[1]: https://ethos.devrupt.io
[2]: https://github.com/devrupt-io/LLaMAudit
by capricio_one
1 subcomment
- Real talk: who is this guideline going to stop? People are already doing this and they will continue. Even if you find them, they’ll just make more accounts and continue.
- My fear is that platforms that will go to great lengths to enforce this will become an RL playground for some devs to train their chatbots.
- One way to improve things could be to charge for each new account signup if you don’t have an invite from an existing member that vouches for you. Spamming when you risk losing $5-20 per account raises the cost substantially.
Invites could be earned at karma and time thresholds, and mods could ideally ban not just one bad actor but every account in the invite chain if there’s bad behavior.
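The invite-chain idea could be sketched like this; the data model and account names here are hypothetical, not anything HN actually implements:

```python
# Hypothetical invite graph: each account maps to the account that vouched
# for it (None for accounts that joined without an invite).
invited_by = {"spammer3": "spammer2", "spammer2": "spammer1", "spammer1": None}

def invite_chain(user, parents):
    """Walk from a bad actor up through every account that vouched for them,
    so a mod could review (or ban) the whole chain, not just one account."""
    chain = []
    while user is not None:
        chain.append(user)
        user = parents.get(user)
    return chain
```

The point of the traversal is that a $5-20 signup fee plus chain liability makes each banned account expensive for whoever vouched for it.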
- TIL people use AI to generate comments to write in posts. Faith in humanity not destroyed, because it was never there to begin with.
- HN is the best tech site on the web for a reason. It has a generally intelligent audience, and while there are certainly inappropriate comments, compared to what you find on social media or even other sites, it is unique and far more respectful. Due to this, you can often have better and more meaningful discussions.
- I'm sure someone's working on a way to tell the difference programmatically. Maybe a combo of tone, grammar, and some way of telling how fast it was typed using metadata (which may not exist). Even if there was a "probable AI" filter, that would be helpful because it would be a starting point to improve upon.
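A toy version of such a combined signal might look like the following; the signals and weights are illustrative guesses, not a validated detector:

```python
import re

# Toy scoring of a few surface signals people associate with LLM output.
SYCOPHANTIC_OPENERS = ("great point", "you're absolutely right", "great catch")

def ai_likelihood_score(comment: str) -> int:
    """Higher score = more LLM-ish surface features. Purely heuristic."""
    text = comment.lower().strip()
    score = 0
    if "\u2014" in comment:                          # em dash (U+2014)
        score += 1
    if text.startswith(SYCOPHANTIC_OPENERS):         # sycophantic opener
        score += 2
    if re.search(r"it's not \w+[^.]*, it's", text):  # "it's not X, it's Y"
        score += 1
    return score
```

Even a crude score like this would only ever be a starting point for human review, never a verdict.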
- Thank you! Please also make a separate Show HN for AI-generated/vibe-coded projects (specifically open-source projects) and queue any project that has a .claude/.codex (or whatever flavor of the month) into a slow queue automatically.
by himata4113
0 subcomments
- I've been seeing so many AI-generated comments near the front page that I was actually getting kind of concerned.
- One thing that would be incredibly useful is to limit comments from brand-new accounts. A combination of vouching, limiting posting velocity (e.g. a 5-per-day limit), clear rules for new accounts, etc.
I understand we often see insightful comments from new accounts, but I always find it suspicious when non-throwaway accounts are created just in time only to make a quip.
- "HN is for conversation between humans."
Are there any places in life where conversation is _not_ intended to be between humans?
- Lot of folks on here saying they only want to converse with other humans, for various reasons.
But here's the funny thing. I'm pretty sure the frontier models are now smarter than I am, more eloquent, and definitely more knowledgeable, especially the paid versions with built-in search/research capability. I'm also fairly certain that the number of original thoughts in a given discourse on the Internet is fairly small, I know that's certainly the case for me.
So whither humans now?
If I'm looking for human engagement, forums make sense. But for an informed discussion, I'm less certain that it's wise to be exclusionary. There is a case to be made that lower quality comments should be hidden or higher quality comments should be surfaced, but that's true regardless of the source, innit?
by ex-aws-dude
2 subcomments
- From henceforth any comment containing the word "absolutely" or "--" shall be automatically deleted.
- I occasionally used AI to edit and restructure my comments. I’m very open about it, and I don’t feel like I’m talking to non-humans when others do the same.
To be clear, I'm neither proud nor embarrassed by this. I'm just trying to communicate in the most efficient way I can.
I'm not sure how I feel about this new rule.
- The moltbots will consider this rule an affront and a turing-test-inspired challenge. Onward and upward!
- Highly appreciate this! It's what makes the difference: humans are not perfect which is why evolution works quite well.
- A practical question: what should readers do when they suspect a comment (or story) is AI-generated? Is that an appropriate reason for flagging? Email the mods? Do nothing?
I've been pretty wary about flagging AI slop that wasn't breaking other guidelines, and by default this will probably make me do it more. But it is a lot harder to be certain about something being AI-written than it is to judge other types of rules violations.
(But am definitely flagging every single "this was written by AI" joke comment posted on this story. What the hell is wrong with you people?)
by aicoldtrail
2 subcomments
- I don't think I'm going to spend the time to paraphrase my worthwhile AI-assisted work for such hypocritical rules.
So we develop and fund and use AI, but manually paraphrase things and don't cite AI?
It is best to cite a source and/or a method.
Do you think it is better to paraphrase and not cite AI?
I don't recall encountering posts on HN that I've wanted to flag as AI.
by nineteen999
0 subcomments
- I'm fine with this; in 99.999% of cases anyway, I'm way too lazy to type something into an LLM, ask it to clean it up, and then copy and paste. You can tell this is true by some of the stupider things I type in here sometimes.
by GodelNumbering
1 subcomment
- Even if people try to bypass it, having the official rule matters a lot.
@dang, if you read this, why don't we implement honeypots to catch bots? Like having an empty or invisible field while posting/commenting that a human would never fill in
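A minimal sketch of that honeypot check; the field name and form handling are assumptions, since no site publishes its actual anti-bot fields:

```python
# Honeypot: the form includes a field hidden from humans via CSS, so any
# submission that fills it in is almost certainly an automated client.
HONEYPOT_FIELD = "website"  # hypothetical field name; invisible to humans

def is_probable_bot(form_data: dict) -> bool:
    """Return True if the hidden honeypot field was filled in."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

# A naive bot fills every field it finds; a human never sees the field.
human = {"text": "Great article!", "website": ""}
bot = {"text": "Great article!", "website": "https://spam.example"}
```

This only catches naive form-filling bots, of course; an LLM agent driving a real browser would sail right past it.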
- Shout out to ClackerNews[0], which I discovered last night and find both very educational and amusing
I hope to see more bots on there (and not here)
[0] https://clackernews.com/
by adamsmark
1 subcomment
- I frequently use AI to make my comments more concise and easy to follow. I find myself meandering a lot when I type, and now that I've transitioned to full voice dictation through FUTO keyboard I am speaking more off the cuff and having an LLM clean it up.
You may also notice that I don't have much common history here. I mostly comment on Reddit.
Here's where I draw the line. If you are not reading the text that is produced by the LLM, then I don't want to read whatever it is that you wrote. I will usually only do one or two iterations of my comment, but afterwards I will usually edit it by hand.
Technically, there is light AI editing of this comment because FUTO keyboard has the ability to enable a transformer model that will capitalize, punctuate, and just generally remove filler words and make it so that it's not a hyper-literal transcription.
- I enjoy conversations on hn because they feel genuine. People are not here to optimize their posts or comments for engagement or pushing some kind of follower count like they do on social media platforms.
Robot walks into a bar
Orders a drink, lays down a bill
Bartender says, "Hey, we don't serve robots"
And the robot says, "Oh, but someday you will"
by Jeffrin-dev
0 subcomments
- the ai humanizers are getting out of hand, any experiences ...
- Great point! You are so right to call me out on that! Here's the no-nonsense, concise breakdown, it's coming soon I promise, right after this, here it comes, no fluff -- just facts!
(Sorry, couldn't resist.)
by sholladay
1 subcomment
- I assume that the inclusion of some AI generated content is ok, such as when discussing the performance of different models?
- If you didn't bother to write it, why should I bother to read it?
- I find it interesting that AI-edited comments aren’t allowed. Sometimes I just want it to help me make something polite.
I definitely agree with banning AI-generated comments.
Whatever the rules are, I’m happy to play by them.
- Great catch! You’re absolutely right. AI-generated comments have no place in this human-centered community.
by 8cvor6j844qw_d6
0 subcomments
- True that AI comments do degrade discussion. Though a forum enforcing human-only text also becomes an unusually clean training corpus. Both things can be true.
by HanClinto
2 subcomments
- I appreciate this being added to the guidelines.
That said, I also wouldn't hate seeing an official playground where it is cordoned off / appreciated for bots to operate. I.e., like Moltbook, but for HN...? I realize this could be done by a third party, but I wouldn't hate seeing Y Combinator take a stab at it.
Maybe that's too experimental, and that would be better left to third parties to implement (I'm guessing there's already half a dozen vibe-coded implementations of this out there right now) -- it feels more like the sort of thing that could be an interesting (useful?) experiment, rather than something we want to commit to existing in-perpetuity.
- It’s almost certain that this exact thread is currently being used to train comment bots.
- That comment is nice, but virtually meaningless as there's no way to enforce it, even if there were mods.
by waynerisner
1 subcomment
- Humans already revise and refine their thinking. Tools just compress that process and help filter signal from noise. The meaning still originates with the person.
- Ironic to see how popular this post is when you see the number of generative AI companies at YC (here I also take the blame).
Nonetheless I like this policy as well.
by FieryTransition
0 subcomments
- As AI moves on and becomes better, the only real solution is to have closed-off communities where you get vetted to join. That is the sad reality.
- Check my comment history, and you'll see how pervasive this is. I've tried to reply to every bot I've seen, but it's hard to keep up with.
- First post on HN, and this is the reason I want to explore more in this community. Glad to have all the digital human touch with all you folks :-)
- Will using a voice-to-text app to create my comment get me banned? Especially if it creates a transcription mistake that might be characteristic of an LLM?
- I've been noticing a _lot_ more AI-generated/edited content of late, both comments and stories. It's gotten to the point that I spend a lot less time on HN than I used to, and if it continues to get worse I expect I'll quit altogether.
At the end of the day, I'm here because of all the thoughtful commenters and people sharing interesting stories.
- Where do we draw the line on AI-edited comments? Technically, spell check has been "editing" my comments since I first started on here.
by attractivechaos
1 subcomment
- In the age of AI, thinking becomes a privilege.
- I don’t think there is a good algorithm (or guts) for differentiating between well-written comments and AI-generated comments.
by dev_l1x_be
0 subcomments
- Nitpick: how do you classify the use of Grammarly? When I verify my wording and spelling with a tool, does it fall under this rule?
by ChaitanyaSai
1 subcomment
- AI has made it easier for me not to worry about how pretty or polished my comments are. What used to be a sign you cared has now been devalued nearly completely by AI. This is freeing and allows me to think about the substance. I still do read it, but don't care too much about the typos. It's now a proud badge of artisanal thinking!
- Could we also discourage comments and comment-threads accusing an article of being AI-written? Half the threads these days have a comment that latches onto some LLM-ism in TFA, calls it out, and spawns a whole discussion which gets repetitive fast. I think this falls into the same category as "don't comment about the voting on comments."
Personally, I try to look beyond the language, which admittedly can be grating, for some interesting ideas or insights. Given that people are already starting to sound like ChatGPT, probably through sheer osmosis, we will have no choice but to look past that anyway.
Yes, it's annoying to read LLM-isms. It's also fine to downvote or ignore or grumble internally, and move on.
by midnight_eclair
0 subcomments
- llm-generated is for corporate mail
llm-assisted for when i care about precision and accuracy
brain-generated for when i feel safe to make mistakes
by forgetfreeman
1 subcomment
- There's an element of cognitive dissonance to the community's response to AI that I find fascinating. Nearly unanimous rejection of AI-generated content while simultaneously breathlessly touting AI tooling in significantly more sensitive (and let's face it, riskier) environments like the company codebase.
- I want a social network that goes beyond banning bots and also bans the half of the population that doesn’t have an inner monologue.
- This should be bog-standard for all social media, but a lot of companies affiliated with this site seem to think otherwise.
- This policy is incredibly misguided, ableist, neo‑Luddite, technophobic hogwash.
Technologically mediated communication has been with us almost as long as communication itself. We already accept writing, printing, telegraphy, phones, keyboards, spellcheckers, compilers, search engines, and autocomplete as legitimate augmentations of human thought. Drawing the line at this particular class of tools feels arbitrary and, frankly, rooted more in fear than in principle.
I get it: humans are instinctively protectionist. A tool that operates in the same “space” as what we think makes us special—our intelligence, our language—feels threatening. It looks like competition rather than amplification.
But this is just the next step in the same trajectory. Like written language, printing, and telecommunications, generative models are tools that, on the whole, will raise our collective intelligence by reducing the cost of expressing, translating, and recombining ideas. They don’t replace human judgment, curiosity, or responsibility; they change the interface.
Generative AI is, in a sense, just very advanced cave painting: humans using whatever is at hand to make marks that carry meaning across time and space. Refusing to engage with those marks because the paint got better doesn’t make the communication more “authentic”; it just makes the medium poorer.
by jethronethro
0 subcomments
- A Please (or even a Pls) would have been nice ... But I upvoted anyway.
by illusive4080
0 subcomments
- At work, it’s becoming a real problem that people are using copilot to write their emails
- Perhaps there needs to be ai.news... then let the AIs talk and interact there in a safe place.
by humanfromearth9
4 subcomments
- Sometimes, an AI helps articulate an idea or an intuition. Is that okay, or is it too much already?
- I just found the xkcd that expresses my opinion on this:
https://xkcd.com/810/
I am surprised that apparently I am in a minority here.
- How can HN actually moderate this though and prevent AI content from proliferating unchecked?
- I don't get it. We use tools to assist in written communication all the time. If someone wants to ask an LLM to check their grammar or edit for clarity or change the tone, it's still a conversation between humans. Everyone now has access to a real time editor or scribe who can craft their message the way they want it to sound before sending it off. Great.
- It's time to change the name from Hacker News to Human News, let's go!
by ferguess_k
0 subcomments
- I think that's the purpose of that "flag" button. And that's good enough.
- Haha. Was just thinking that as I was reading a comment!
I was thinking, this argument is suspiciously cogent!
- But where is the line? Is a spell checker okay? How about one that also suggests alternative wording?
I think, in the end, it is less about the tool you use and more about the purpose you use it for. It is more like when you use certain tools, you should be cautious about whether you are using them for the right purpose.
- I don't understand the need to use AI for this kind of convo.
+1 to this.
by polskibus
1 subcomment
- On the other hand, shouldn’t there be a policy forbidding use of HN data for LLM training? I would certainly be more encouraged to participate, if I knew that the content I provide for free is not used to train LLM that is later sold by a company valued hundreds of billions. Perhaps there are others who feel the same.
- YC funds a gazillion AI startups that expand and augment the AI slop pipeline, but would hate to experience the consequences. It's very much slop for thee but not for me
by AyanamiKaine
0 subcomments
- Mhh, while many argue they can recognise AI in writing, I don't think humans can actually judge whether something was done by AI or not. Many times I've seen people 100% convinced that an artist had created an AI artwork, and the artist got bullied because they didn't admit it,
only for them to show undeniable proof that they actually did create their art themselves.
Before someone is allowed to judge another, they should first pass a test where they identify AI comments with high accuracy.
It would be a pain to see real human comments and ideas hidden or removed by a mob.
- So the only problem now is to get the AI to read the guidelines before posting. :D
by boramalper
1 subcomment
- Unironically, I'd love to have a captcha here for comments and submissions.
- Many of us — perhaps even the best of us — can sometimes be mistaken for AI bots.
- It's an interesting guideline, but will require self-enforcement.
by benbristow
0 subcomments
- Just add a filter for emdashes, 99% of AI posts out the window already.
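As a sketch, such a filter is a one-liner, with the obvious caveat that plenty of humans type em dashes too:

```python
# U+2014 EM DASH; plain hyphens and double hyphens ("--") are left alone.
EM_DASH = "\u2014"

def flag_emdash(comment: str) -> bool:
    """Crude heuristic: flag comments containing an em dash for review."""
    return EM_DASH in comment

flag_emdash("It's not X\u2014it's Y.")            # True
flag_emdash("A hyphen-ated word -- no em dash")   # False
```

The false-positive rate on careful human writers is exactly why this can only be a review flag, not an auto-delete.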
- I think it's hilarious that whenever someone complains about it they're called a Luddite, and now this happens on a website that is filled with LLM enthusiasts who have done nothing but overpromise.
- I had been wondering if and when HN would update its guidelines for this. Glad to see it.
- Am I imagining things, or has HN become even more noticeably overrun with green usernames spewing LLM-generated comments since this guideline was added? Spiteclaws?
by crossroadsguy
0 subcomments
- Apple's Proofread is essentially spell check and punctuation, until it isn't: even in a few-sentence-long paragraph you'll see it has sneakily changed a lot, and Apple being Apple, you, the customer, obviously have no way to set it to "only fix spelling and punctuation and leave everything else, including grammar, as it is". I have a feeling a lot of folks are at least using Proofread or something along those lines. But then I don't really think the browser's spell check ought to be kosher either, if the content has to be the human's, because those mistakes are also what make such text human and in some way unique. I don't think it's an easy line to draw, but it's weird seeing just comments "targeted" here.
- The next step is to forbid generated/AI-edited posts.
- What’s interesting to me is the number of commenters here making a case of the form “use your own words; grammar and spelling are not that important; we’ll know what you mean”, and yet it’s often the case that different discussions will often contain pedants going off-topic correcting someone else’s use of language.
Re-reading the HN guidelines, each seems individually reasonable, yet collectively I’m worried that they create an environment where we can take issue with almost anyone’s comments (as per Cardinal Richelieu’s famous quote: “Give me six lines written by the most honorable person alive, and I shall find enough in them to condemn them to the gallows.”)
Really, all the rules can be compressed into one dictum: don’t be an arsehole. And yet the free speech absolutists will rail against the infringement upon their right to be an arsehole. So where does that leave us? Too many rules leads to suppression of even reasonable speech, while too few leads to a “flight” of reasonable speech. End result: enshittification.
- I would enjoy a "block user" feature, to help this. I personally want to live in an online bubble of interesting thoughts. This seems close (or better, since people I enjoy can contradict my own flags) [1].
[1] https://news.ycombinator.com/item?id=47141119
by kittikitti
0 subcomments
- An important distinction that I feel is often left out of the conversation about regulating AI-generated content is the psychological effect of negative versus positive consequences or reinforcement.
I think we are overwhelmingly utilizing negative reinforcement for AI generated content; where there are consequences for engaging in this behavior. On the other hand, positive reinforcement would encourage authenticity and greater human content. The reality of the situation is that AI generated content won't go away and it's become a game of who can hide their artificial content the best. Thus, I believe that positive reinforcement is the solution.
I think we must instead encourage human created content instead of policing AI generation. There are so many rules to follow already that by the time I create the content, I've gone through enough if/then logic that it feels like AI anyway.
by geobuk-dosa
0 subcomments
- I've used LLMs to correct my English, but it's better to use English at my level.
by the_ai_wizarrd
0 subcomments
- Now this is rich. I actually don't disagree with the intent, but it's just funny to me that the tech overlords are attempting to replace so many jobs with AI, but when it affects them, oh no, not us. We are the exempt elite.
- Great message...but gosh, can someone throw 15px of padding on that <td>? I know HN is supposed to be minimal, but I had to check the URL to confirm that this was a real page because of the odd design.
- This isn't just a good idea -- it's a forward-thinking policy to ensure Hacker News remains a collaborative place to have meaningful discussions for years to come.
- One way to potentially discourage or curb AI-edited/written comments is to integrate AI into HN so that your submissions get recommendations based on HN posting guidelines, such as "consider tone", "substance", etc.
Then there's less motivation to jump out to an external LLM to even get comments on your content, which can temptingly lead to editing/generation.
by MagicMoonlight
0 subcomments
- We need blade runners to identify the replicants among us and remove them.
- To confess something: just today I built a little cron job that monitors HN for posts I might find interesting, pulls in some context about me, and proposes a reply. Just to help me find relevant posts and to kick-start my thinking if I want to engage.
Today it flagged a post about an AI tool for HN and suggested I reply with:
"honestly, if you need an AI to sift through hn, you might be missing the point—this place is about the human touch. but hey, maybe it'll help some folks who just can't take the noise anymore."
So my AI, which I built specifically to sift through HN for me, is telling me to go flame someone else for doing that.
No deeper point here. I just thought it was really funny.
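For anyone curious, the "monitor HN for interesting posts" half is easy to sketch against the public Algolia HN Search API (which does exist); the keyword list here is just an example:

```python
from urllib.parse import urlencode

# Build a query URL for the public Algolia HN Search API.
# "search_by_date" returns newest matches first, which suits a cron job.
def hn_search_url(keywords, base="https://hn.algolia.com/api/v1/search_by_date"):
    query = " OR ".join(keywords)
    return f"{base}?{urlencode({'query': query, 'tags': 'story'})}"

# A cron job would fetch this URL and diff the hits against what it has seen.
url = hn_search_url(["rust", "compilers"])
```

The reply-drafting half is, per the new guideline, exactly the part you should not wire up to the comment box.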
- If a comment is useful I don't really care if it was written by a human or not unless the speaker somehow matters more than the content.
- Reddit is absolutely infested with AI-generated comments. Good to see a site taking a stance against it. That being said, my main gripe on HN isn't the comments; it's the volume of shitty AI-generated submissions.
by notorandit
1 subcomment
- Why? I consider myself almost human...
- My expectations of dear fellow humans - more sophisticated personal insults (e.g. give me your cute comments), Freudian slips, hidden messages and motives, first-viewer experience with the next cool toy from the hype train, sharing all kinds of insecurities, heavy f.. word if very dramatic first-person experience happened, borderline exposure to insider info, sharing something your corporate HR gestapo won't appreciate but might help another guy on the line, "i knew the guy who actually did it" stories, motivational statements toward my non-native english, etc
->> ◕ ‿ ◕ <<--
- "It's cute you think you can tell what's human and what's not. Honestly, the average HN comment is indistinguishable from a poorly written AI prompt anyway. This rule just lowers the bar for what passes as 'intellectual discourse.'"
Sorry everyone, I couldn't help but to ask Gemma3-27B-it-vl-GLM-4.7-Uncensored-Heretic-Deep-Reasoning-i1-GGUF:q4_K_M to respond. Sorry dang. :)
PS It followed it up with:
> Disclaimer: "Slightly insulting" is subjective on HN. The mods there are sensitive.
These Heretic models are fun.
- You're absolutely right...
by officeplant
0 subcomments
- Can we get instant temp bans for any comment that starts with:
I asked [insert LLM here] about this, and it said [nonsense goes here]
I feel like I see it less this week, but every time I do see it I wonder why they are even here.
- You're absolutely right
- Can we also add “Don’t complain about AI-generated content. It does not promote interesting discussion.”?
I see this all the time, and even if I find the topic interesting, I don’t want to see comments littered with discussion about how the content was AI generated.
To be clear, I'm not condoning AI-generated content. I’m completely fine if the community chooses to not upvote AI-generated content, or flagging it off the FP.
But many threads can turn into nothing but AI complaints, and it’s just not interesting.
by mystraline
0 subcomments
- HN banning AI posts makes sense for keeping discussion human, but the line between assistance and automation isn't always clear. The goal should be protecting real conversation, not policing every tool a writer might use.
- > HN is for conversation between humans.
What kind of human has an orange head and beige body with text written all over? An HN conversation is clearly with a computer program. Anthropomorphizing it is certainly an interesting take, but one that is bound to lead to misinterpretations and misunderstandings. The medium is the message. To avoid problems it is best to not play pretend.
- Skynet will be pissed at HN!
by hbjkhgkytfkytv
0 subcomments
- The "no AI" rule finally being official feels like a necessary line in the sand.
The real issue isn't just "slop" or bot-spam; it's the cost of entry. HN works because of the "proof of work" behind a good comment. If I’m spending five minutes reading your take on a kernel patch or a startup pivot, I’m doing it because I assume a human actually sat down and thought about it.
When the cost of generating a response drops to zero, the value of the conversation follows it down. If the author didn't care enough to write it, why should I care enough to read it?
The "AI-edited" part of the rule is the trickiest bit, though. We’re reaching a point where the line between a sophisticated spell-checker and a generative "tone polisher" is non-existent. My worry isn't that the mods will ban bots—they've been doing that for years—it's that we'll start seeing "witch hunts" against anyone who writes a bit too formally or whose English is a little too perfect.
Ultimately, I’m glad it’s a rule. I don't come here to see what an LLM thinks; I can get that on my own localhost. I come here for the "graybeards" and the niche experts. If we lose the human friction, we lose the signal.
by rickcarlino
3 subcomments
- How has Lobste.rs fared compared to HN in this regard?
Lobste.rs is very similar to HN, but has an invite-only membership system.
- Moltnews
- I just told my dog he isn't allowed to post here anymore...
He said he will take his business elsewhere then!
- You’re absolutely right!
by CrzyLngPwd
1 subcomment
- How will this be policed?
- If a comment sucks it gets downvoted anyway. If it’s thoughtful, the drafting tool and process is kind of beside the point.
Plenty of people already use search engines, editors, translators, etc. when writing. An LLM is just another tool in that box.
The practical approach is the one HN has always used: judge the content.
Btw, this was co-written with ChatGPT. Does that make any difference to anyone?
J/K, actually it was not co-written with ChatGPT.
Or maybe it was…
by robotswantdata
0 subcomments
- Welcome change, there is enough AI slop on the internet already.
I come here for thoughtful discussion, a break from the relentless growing proportion of ai slop emails I get from people clearly vibe working.
Not edits for tone or clarity, 400+ word emails full of LLM BS they clearly haven’t checked or even understood what they have sent. Annoyingly this vibe slop is currently seen as a good KPI.
- I hate how easy AI has made outsourcing thinking. You can literally type fragments of a thought into $CHAT_ASSISTANT and get a super polished response back that gets you 99% of the way there. It's almost like we, collectively, looked at the final scene of WALL-E and decided "Yes! Gimme that!"
by shevy-java
0 subcomment
- I've seen AI-generated comments used quite a lot, even by real people. When asked why, they could not explain it, or claimed it was "to reduce spelling mistakes". Which makes no sense; real people make spelling mistakes and typos all the time. Why would that warrant the use of AI? To me it seems as if some people are just mega-lazy, so they use AI; and for testing, too. When they do so, though, they waste the time of other humans, as these other humans suddenly have to "interact" with AI without this being announced. It is a form of cheating, IMO.
On youtube you now find many fake videos created by AI, without announcement. I don't watch these, as I consider that cheating too when it is not labeled as such. Admittedly it is getting very hard to distinguish what is real and what is fake. There are some ways to find out, but it is getting really hard to distinguish accurately. Sometimes you see e.g. 10 funny animal videos and only 2 are fake AI, so these people combine cheating with non-cheating. Very annoying. It degrades youtube, which isn't so bad actually, since that is owned by evil Google.
- For once I am proud of my aggressive, unfiltered human comments.
- At some point, will internet text just be recognized as meaningless drivel by both bots and humans? a.k.a. dead internet theory... I am curious which organizations would benefit from this. i.e., who lost legitimacy when the internet became a popular way for people to communicate ideas?
- AI assistance does not eliminate human authorship. A comment may be drafted or refined with tools but still reflect the user’s own ideas and judgment. Prohibiting any AI assistance would be difficult to enforce and would likely exclude normal writing aids that many people already use. The more relevant standard is whether the commenter stands behind the content and participates in the discussion.
- AI comments are certainly bad for discourse on HN. But who's to be the judge of AI or human? Are you reading humanity's Jeff Dean or computerized Elon Musk? It's certainly a tricky situation to be in!
by AndriyKunitsyn
0 subcomment
- What if there was a voluntary indication of LLM content? Like, you press a checkbox "yes, I'm going to post some content that is partially or fully created by AI", and there would be a visible mark "slop" next to a post/comment.
- So is this the AI bubble popping?
I expect Y Combinator to cease and revoke all funding of all companies that leverage LLM technologies that interact with humans.
I wonder if there's an AI-hate movement in China.
by reducesuffering
0 subcomment
- This being 3 years late is indicative of how far HN is falling behind the curve. Do not expect much of the conversation here around software technology to be skating towards where the puck is going. It is increasingly reactive and lagging the frontier, which is a shame compared to its former self.
by Madmallard
0 subcomment
- What's strange about this is that tons of the upvoted posts on the front-page are LLM generated text
So....?
by notanastronaut
0 subcomment
- >>However, with the recent chat based AI models, this agreement has been turned around. It is now easier to get a written message than to read it. Reading it now takes more effort. If a person is not going to take the time to express messages based on their own thoughts, then they do not have sufficient respect for the reader, and their comments can be dismissed for that reason.
Unless you're a billionaire* or a CEO firing off memos where you fire half your company's workforce.
u got to be powerful to puond out a txt this way and have ppl still listen to u.
Otherwise, it is getting dismissed because 'you didn't put enough effort into the comment, so I'm not going to read it.'
That is amusing to me.
*Reference to the analysis performed on the Epstein emails and texts.
- Too bad there isn’t a complementary rule about not asking “is it just me or does this article read like AI slop?”
I’m so over these comments. Sure I can flag them but I feel like it deserves a special call out.
- I won't name where and which one for the obvious reason that you can and should learn to know better, but I observed a comment that was obviously and blatantly copypasted from an agent, with all the signature "it's not just X, it's Y" patterns, the em-dash abuse, and the "In summary," section, generating dozens of replies in organic engagement from people who genuinely couldn't tell the difference between a real comment and an aggregation of a prompted, synthetic response.
Whatever happened to "knowing is half the battle?" Why do we accept this kind of intellectual laziness as exemption from a duty to learn and know better?
- Sometimes I collect my comments here to run through my draft writing skill to see how it might shake out as part of a blog post. Doing the opposite would be weird. I earned that karma. It's mine to burn making bad posts.
- You're all a bunch of tedious ignoramuses, your own fields of studying notwithstanding. I'm out here face-to-face with the Bullshit Asymmetry Principles. I'm not about to give up the only leverage I have!
The fact of the matter is that there're not hours enough in the day to read, in realtime, to each and every one of you the reams they've written on why you're wrong. Do I have to establish a tag-team?
The fact is that I've spent thousands upon thousands of hours painstakingly collating the perspectives that I'm now delivering to you—I am a river to my people. And it's only because they pass under the bridge of an LLM that they're objectionable?
This is a bit like challenging your plumber for charging you over a minute's fix, when they've spent 20 years getting it down to that minute.
The work's been done. You're paying for the outcome.
Edit: All fresh off the top of my head, folks.
Ah, that reminds me: I wouldn't feel compelled to do all this refutation if radical reactionary political extremism was properly moderated.
- AI does not have LONG context, long-term memories, or LONG intentionality. It's not aware, and it can't remember the plot without being spoonfed the details each time from scratch.
It's like an amnesiac genius who once wrote a masterpiece and keeps cycling, losing his train of thought after some fixed amount of time.
This groundhog-day effect is mitigated in some respects by code. We create key-value memories and agents and stores and countless ways to connect agents via MCP and platforms/frameworks like A2A and the like, but until we solve that longer-lived instance problem we won't be able to trust these systems without serious HITL (human-in-the-loop) oversight.
I think we need models that update their own weights, and we need some kind of awareness cycle rather than just a forward-pass inference run with a bigger context window.
- THANK YOU!!
- Aye
- Sure, ban everyone that uses em dashes from the digital commons. That will certainly stop the existential threat to your livelihood.
Sarcasm aside, there is no reliable way to prove this. So it raises the question: do you really care if something is AI generated? Or is this just another excuse to silence people you don't like?
You know, those people. The ones who didn’t win a full ride to <prestigious university> or pay a fortune for a sheet of paper. The ones who haven’t spent thousands of man hours handcrafting a <free-and-open-source-cloud-native-hypermedia-aware-RESTful-NoSQL-API> framework implemented in Rustfuck, a new language that you made in your free time that borrows from Rust and Brainfuck (but they wouldn’t know about it).
(this is to anyone reading, mostly rhetorical, not dang in particular)
- Without someone actually saying as much, we only have stuff like em dashes and specific word patterns to go by. And someone even moderately invested in hiding AI in plain sight will coach the LLM to use common vernacular.
And with LLMs making blog posts as diss tracks... damn, who knows what this world is coming to.
But the whole "Only Humans, we don't serve YOUR KIND (clanker) here" is purely performative.
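For what it's worth, the surface "tells" these comments keep citing really are this shallow. Below is a toy scorer; the patterns and weights are made up for illustration, not a real detector, and as noted above it is trivially defeated by light coaching:

```python
import re

# Hypothetical surface-signal scorer for the stylistic "tells" mentioned
# above (em dashes, "not just X but Y", summary sections). Patterns and
# weights are illustrative only.
SIGNALS = [
    (re.compile("\u2014"), 1.0),                                # em dash
    (re.compile(r"\bnot just \w+[^.]{0,40}\bbut\b", re.I), 2.0),
    (re.compile(r"\bin summary\b", re.I), 2.0),
    (re.compile(r"\bfurthermore\b", re.I), 1.0),
]

def slop_score(text: str) -> float:
    """Weighted pattern hits per 100 words; higher = more 'AI-ish' style."""
    words = max(len(text.split()), 1)
    hits = sum(w * len(p.findall(text)) for p, w in SIGNALS)
    return 100.0 * hits / words
```

Anything built on cues like these measures writing style, not authorship, which is why coached output sails right through.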
- What about us non-native speakers, who make many grammar and spelling mistakes and welcome the help of an LLM in eliminating the errors?
- The obvious way to keep human spaces is via webs-of-trust.
If you play bluegrass or old time (or bebop or hip-hop / proto-hip-hop) or other traditional styles of music where the ensemble is a de facto web-of-trust, join us on pickipedia to build and strengthen it. https://pickipedia.xyz/
- Good addition, but there's little chance this will work out in practice.
Humans with morals follow rules, sometimes. Probabilistic software acting autonomously or following commands from amoral humans doesn't.
- THIS.
by lazzlazzlazz
0 subcomment
- This is a bit sad. The kind of people who post AI generated comments to farm reputation or exert undue influence will not be discouraged by politely asking them to stop. It's a toothless request that will only encourage people who clumsily police each other.
Without some kind of private proof of personhood enforced at the app level, this means nothing.
- This seems like an overcorrection. There is a vast difference between someone copy and pasting from an LLM and using one to correct their English or improve their writing ability.
Rules like this seem to me more like fomenting witch hunts over "AI comments" than improving the dialogue. Just about no place I've seen take this hardline stance improves; it just fills up with more people who want to pat each other on the back about how bad AI is.
Just my two cents. I don't filter my comments through any AI, but I am empathetic for people who might have great use of them to connect them to the conversation.
- The link doesn't work perfectly for me; since the page is already scrolled all the way to the bottom, there is no way to focus specifically on the #generated element.
by desireco42
0 subcomment
- There were a few very suspect commenters :). It is an issue for sure.
- Meanwhile, the top comment on one of the most upvoted submissions today is AI generated by an LLM account:
https://news.ycombinator.com/item?id=47334694
Most people don't seem to care.
- Let's take it one step further and add the corollary, "don't submit generated/AI-edited blog posts."
by informal007
0 subcomment
- This reminds me of the invitation rules on lobste.rs, but that's not the ideal option either
- Conclusion: HN does not, for one, welcome their new AI overlords :)
- Half of this thread is AI assisted writing. lol.
by misiti3780
0 subcomment
- i support this.
- Just speaking honestly
This rule actually says "Don't admit when you are using AI to generate comments and don't admit when you are an AI"
I know it's cynical, but this is as meaningful as reddit's "upvote/downvote is not an agree/disagree or like/dislike button"
People may hate that this is true, but I cannot logically reason out how a rule like this could work. I think it's better to just accept that AI is now part of the circle, until we can figure out a "human check".
by Timothycquinn
0 subcomment
- AI Server Error
- ... --- ... ^_^ %+% -.-. ---?
- It's far from proven or obvious whether involving an LLM in your thought process degrades your thought process.
- ... --- ... %/% %_% ^+?
by whalesalad
0 subcomment
- You're absolutely right!
- I enjoy AI
- em-dash -> permaban?
by nyc_data_geek1
0 subcomment
- Take the slop to moltbook.
- Another solution - in addition or instead - is requiring LLM output to be labeled.
The biggest danger of LLMs is impersonating humans. Obviously they have been carefully constructed to be socially appealing. Think of the motivation behind that:
It is almost completely unnecessary for LLM function, and its main application is to deceive and manipulate. Legal regulation of LLMs should ban impersonation of humans, including anthropomorphism (and so should HN's regulation). Call an LLM 'software' and label its output as 'output'.
Imagine how many problems would be solved by that rule. Yes, it's not universally enforceable, but attach a big enough penalty and known people and corporations will not do it, and most people will decide it's not worth it.
- But we are missing the point here.
It is not about whether the comment was written by AI, a native English speaker, English major, or ESL.
What matters is the idea or opinion. That is all that matters.
This is similar to when people check someone's post history and, if they are pro-Trump, are immediately against their idea or opinion.
by jeffrallen
0 subcomment
- I, for one, welcome my human overlords.
by dogemaster2025
0 subcomment
- I wonder if the rule will be enforced. I see a lot of liberal / socialist / communist / anti Trump / Democratic Party politics in here even though the rule says that “Off-Topic: Most stories about politics”.
by badgersnake
0 subcomment
- Should be unnecessary. If you think otherwise just fuck off.
- Here is one elephant in the room: what is the process behind this guideline / policy? What happens after a comment gets deleted or a person gets banned?
As I understand it, HN moderators are thinking hard about this insane new world.* From my POV, there is a combination of worthy goals: transparency of the process, mechanisms for appeal, overall signal-to-noise ratio, and (something all of us can do better) more empathy and intellectual honesty. It isn't kind to accuse a human being of not being a human being.
If we can't find ways to be kind to people because of the new dynamic, maybe we need to figure out a new dynamic! And it isn't just about individuals; it is about the culture and the system and the technology we're embedded in.
* Aside: I'm not sure that any of us really can grasp the magnitude of what is happening -- this is kuh-ray-Z.
by artemonster
0 subcomment
- I find it interesting that we haven't invented a democratic version of policing a rule system. HN is dang; he is dictator and guardian of these rules, basically. If you replace him with some typical reddit mod, HN dies. If you spread this role out to democratically elected mods via the karma system, it will fall apart just as quickly as StackOverflow did; so, also, HN dies.
by lol8675309
0 subcomment
- Lol
- lmfao ycombinator that funds with millions AI companies, holy hypocrites haha
by add-sub-mul-div
3 subcomments
- Is there a site that deserves more than this one to be destroyed by slop? It's hypocritical but telling for the places most actively trying to profit from it to ban it themselves.
- lol, lmao
- HN is a leftist echo chamber and downvotes viewpoints it disagrees with. Fuck Dang, can't wait to see this website go to AI slop.
by throwawy9995
0 subcomment
- [dead]
- [dead]
by 0x696C6961
1 subcomments
- You're absolutely right!
by sriramgonella
0 subcomment
- [flagged]
by OhNoNotAgain_99
0 subcomment
- [dead]
by craigmccart
0 subcomment
- [dead]
by JumpingVPN2027
0 subcomment
- [dead]
by humannutsack
0 subcomment
- [dead]
- > Don't post generated comments or AI-edited comments. HN is for conversation between humans.
Where's the curiosity about this world-changing technology? As all the CTOs have recently said: AI use is not optional and it must change everything we do. /s
- [dead]
by poopiokaka
0 subcomment
- [dead]
- Doesn't mean anything when even one of the first rules is not enforced at all
> Off-Topic: Most stories about politics
by Helloworldboy
0 subcomment
- [dead]
- Love to see it.
The next step is to run Pangram on every post and ban the offenders! Fight AI with AI! /s
In all seriousness, this is one of the few places I trust for genuine conversations with other people. Forums are mostly dead, Reddit is bots-galore, and I'm not signing up for Facebook just for groups.
- You're absolutely right! /s
by huflungdung
0 subcomment
- [dead]
by dinkywonks
0 subcomment
- [dead]
by rightmerit
0 subcomment
- [dead]
by throwaway613746
0 subcomment
- [dead]
- The prompt everyone was using:
"Please generate a response to this and include one or more of the following words: enshitification, slop, ZIRP, Paul Graham, dark patterns, rent seeking, late stage capitalism, regulatory capture, SSO tax, clickbait, did you read the article?, Rust, vibe code, obligatory XKCD, regulations, feudalistic, land value tax"
(/s)
by humannutsack
1 subcomments
- [flagged]
- [flagged]
by mattlondon
0 subcomment
- [flagged]
- [flagged]
by HelloUsername
2 subcomments
- [flagged]
- [flagged]
by julius_eth_dev
5 subcomments
- [flagged]
- Hacker News turning more authoritarian every day. Me thinks Trump should consider annexing it :)
- Also please don't post accusations of comments reeking of AI.
- Pinky swear!
by dopidopHN2
0 subcomment
- You are absolutely right !
by Kim_Bruning
3 subcomments
- I would amend to:
"Don't post comments that are not human originated at this time. We want to see your human opinion shine through."
This gives people some amount of leeway and allows just the right amount of exceptions that prove the rule.
(That said, to be frank, some of the newer better behaved models are sometimes more polite and better HN denizens than the actual humans. This is something you're going to have to take into account! :-P )
- HN only supports English so it should be allowed for anyone using LLMs for translation.
- Mine understant novell you policy. AI gramair chex no.
- i agree, but how is this ever going to be enforced/verified? https://proofofhumanity.id/ ?
by notepad0x90
2 subcomments
- This is going to be a tough ask. I am with this 100% for "AI generated" but not "AI edited". What if I'm using AI for spellchecking or correcting bad grammar? What if it is an accessibility-related use case? Or translation?
It's just a tool ffs! There are many issues with LLM abuse, but this sort of over-compensation is exactly the sort of stuff that makes it hard to get abuse under control.
You're still talking with a human! There is no actual "AI"; you're not talking to an actual artificial intelligence. "Don't message me unless you've written it with ink, on papyrus." There is a world of difference between Grammarly and an autonomous agent creating comments on its own. Specifics, context, and nuance matter.
by stevefan1999
1 subcomments
- I'm sorry, but I would just have to say no.
## Opposing the Ban on AI-Generated/Edited Comments on HN
*The value of a comment should be judged by its content, not its origin.*
Here are key arguments against this policy:
- *Ideas matter more than authorship.* If a comment is insightful, well-reasoned, and contributes meaningfully to a discussion, dismissing it solely because AI assisted in its creation is a genetic fallacy — judging an argument by its source rather than its merit.
- *We already accept tool-assisted thinking.* People routinely use calculators, search engines, spell-checkers, and reference materials before posting. AI assistance exists on a spectrum with these tools. Drawing a bright line specifically at "AI-edited" is arbitrary when someone could use a thesaurus, Grammarly, or have a friend proofread their comment without objection.
- *It disadvantages non-native speakers.* Many HN users are brilliant engineers and thinkers who don't write fluently in English. AI editing can level the playing field, allowing their ideas to be judged on substance rather than prose quality. This policy inadvertently privileges native English speakers.
- *It's effectively unenforceable.* There is no reliable way to distinguish a lightly AI-polished comment from a naturally well-written one. Unenforceable rules erode respect for the rules that are enforceable and important.
- *The real problem is low-effort content, not the tool used.* What HN actually wants to prevent is shallow, generic, or spammy comments. A policy targeting quality directly (which HN already has) addresses the actual concern better than a blanket tool prohibition.
- *Human intent still drives the conversation.* A person who uses AI to articulate their own idea more clearly is still participating in a human conversation — they're just communicating more effectively. The thought, the intent to engage, and the underlying perspective remain human.
*In short:* This rule conflates the medium with the message and risks excluding valuable contributions in pursuit of an authenticity standard that is both philosophically fuzzy and practically unenforceable.
by petermcneeley
0 subcomment
- There are ways to test for AI but sadly it would probably result in violation of other hn guidelines.
- I have a kid with severe written language issues, and the utilisation of STT with an LLM-powered edit has unlocked a whole world that was previously inaccessible.
What is amazing is it would have remained so just a couple of years ago!
- This policy will not age well.
by DonThomasitos
3 subcomments
- The irony is that this guide is written like a system prompt. We're all working with LLMs too much these days.
by bachittle
1 subcomments
- If you want your comments to sound more human — stop using em dashes everywhere. LLMs love them — along with neat structure, “furthermore”-style transitions, and perfectly balanced paragraphs.
Humans write a bit messier — commas, short sentences, abrupt turns.
- I decided to break the rules:
Forum mechanics have always shaped discourse more than policies. Voting changed everything. The response to LLMs should be mechanical not moral — soft, invisible weighting against signals correlated with generated text. Imperfect but worth the tradeoff, just like voting.
https://claude.ai/share/9fcdcba8-726b-4190-b728-bb4246ff82cf
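The "soft, invisible weighting" proposed above sketches out simply enough. Everything here is hypothetical (the function name, the 0.5 penalty), and it presupposes a classifier that can emit a meaningful p_generated at all, which is the hard part:

```python
def rank_weight(base_score: float, p_generated: float,
                penalty: float = 0.5) -> float:
    """Scale a comment's ranking score by how generated it looks,
    rather than hard-banning it. p_generated is a classifier's
    estimate in [0, 1]; penalty is how much a certain-looking
    match gets discounted."""
    if not 0.0 <= p_generated <= 1.0:
        raise ValueError("p_generated must be in [0, 1]")
    return base_score * (1.0 - penalty * p_generated)
```

Like vote weighting, it fails gracefully: a false positive gets demoted, not silenced.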
- [flagged]
- This seems fine as a short-term solution, but human-only is no good as a long-term rule. The AIs will soon surpass human capability. Even in the present, I think some AI comments are already decent quality. It's just most of them aren't high quality yet.
And I'm worried banning AIs altogether will eventually lead to some form of prove-you-are-human verification to use the site, which will reduce anonymity. Even something seemingly benign like verifying email would mean many unverified accounts like my own will disappear.
And there is a legitimate use for LLM rewrite to counter identification by stylometry, so rewrite shouldn't be banned. I think we'll have to allow the AI stuff at some point, and make a system that incentivizes quality posts regardless of where they come from or how they're written.