If using AI to write is nothing to be ashamed of, then you shouldn't feel the need to hide it. If it is something to be ashamed of, then you should stop doing it. If someone objects to you poisoning a well, the correct response is not to use a more subtle poison.
I often hear this here: "if you don't bother writing, why should I bother reading?" In fact, save us some time and just share the prompt.
If anyone who works on LLMs is reading, a question: When we've tried base models (no instruction tuning/RLHF, just text completion), they show far fewer stylistic anomalies like this. So it's not that the training data is weird. It's something in instruction-tuning that's doing it. Do you ask the human raters to evaluate style? Is there a rubric? Why is the instruction tuning pushing such a noticeable style shift?
[1] https://www.pnas.org/doi/10.1073/pnas.2422455122, preprint at https://arxiv.org/abs/2410.16107. Working on extending this to more recent models and other grammatical features now.
It also struggles to maintain deep coherence. This is all probably related. It might be very hard or impossible to have deep coherence without human-like goals, memory, or sense of self.
I'll give some examples. Some will be from this list of "AI writing tropes" and some will be from prominent human-written (prior to 2020) sources. Guess which is which (answer at the bottom).
- "Let's explore this idea further."
- "workload creep"
- "Navigating the complex landscape of "
- "Let's delve into the details"
And I'm not going to get into how silly this is as a so-called LLM trope: "Every bullet point or list item starts with a bolded phrase or sentence." I remember reading paperbacks published before the first PC that used this style.
Fractal summaries are literally how composition is taught to students. Avoiding that style will make the writing sound less like a person wrote it.
I would suggest the author upgrade this to a modern version of Strunk & White and go on a mission to sell that. Call it Anti-Corpspeak or whatever. But don't pretend that these formulations only arrived in bulk in the last 2-3 years.
ANSWER KEY: these are all obviously prominent in text published before LLMs hit, as well as in the tropes doc. They are no more signs of LLM-generated text than is the practice of using nouns, verbs, and adjectives to convey ideas.
> Add this file to your AI assistant's system prompt or context to help it avoid common AI writing patterns.
So if I paste this into my LLM's conversation, I am effectively instructing it to put the file into its own AI assistant's system prompt: the AI assistant's AI assistant.
The alternative is to say:
"Here is a list of common AI tropes for you to avoid"
Each trope is described so that I, the reader, understand what AIs do wrong:
> Overuse of "quietly" and similar adverbs to convey subtle importance or understated power.
But this in fact instructs the assistant to start overusing the word 'quietly' rather than stop overusing it.
This is then counteracted a bit by the 'avoid the following...' framing, but it means the file is full of contradictions.
Instead you'd need to say:
"Don't overuse 'quietly', use ... instead"
So while this is a great idea and list, I feel the execution is muddled by the explanation of what it is. I'd separate the presentation for us, the users of assistants, from the version meant for the intended consumers, the assistants themselves.
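One way to resolve the contradiction mechanically: rewrite each descriptive trope entry into an imperative prohibition before assembling the system prompt. A minimal sketch in Python; the trope entries and the `to_instruction` helper here are illustrative, not taken from the actual file:

```python
# Hypothetical sketch: convert descriptive trope entries
# into imperative instructions an assistant can follow.

TROPES = [
    'Overuse of "quietly" and similar adverbs to convey subtle importance.',
    'Breaking the title into a title and subtitle, separated by a colon.',
]

def to_instruction(trope: str) -> str:
    """Prefix a descriptive trope with an explicit prohibition,
    so the prompt reads as a rule rather than an example to imitate."""
    return f"Do NOT do the following: {trope}"

def build_system_prompt(tropes: list[str]) -> str:
    """Assemble one system prompt from a list of trope descriptions."""
    rules = "\n".join(to_instruction(t) for t in tropes)
    return "You are a writing assistant. Follow these style rules:\n" + rules

print(build_system_prompt(TROPES))
```

This keeps the human-readable catalogue and the assistant-facing instructions as two distinct artifacts generated from the same source list.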
I've had claude rewrite it and put it in this gist:
https://gist.github.com/abuisman/05c766310cae4725914cd414639...
Also whoever claims "no human writes like this" hasn't been to LinkedIn... though the humanity of those writers might be debatable. But all the vapidity, all the pointless chatter to fill up time and space, it learned that from us.
I wouldn't have delegated this to an AI. Human for human, human for AI.
> Honestly? We should address X first. It's a genuine issue and we've found a real bug here.
Honorable mention: "no <thing you told me not to do>". I guess this helps reassure adherence to the prompt? I see that one all the time in vibe coded PRs.
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Another one that seems impossible for LLMs to avoid: breaking article into a title and a subtitle, separated by a colon. Even if you explicitly tell it not to, it'll do it.
I can understand someone needing help with writing but getting an agent to do the job for you feels like a personal defeat.
It makes a tremendous difference. Almost everything on this list is the emotional fluff ChatGPT injects to simulate a personality.
If you can convince people that SVO is a distinctly AI pattern, it's an automatic win.
Negative parallelism is a staple of briefs. "This case is not about free speech. It is about fraud." It does real work when you're contesting the other side's framing.
Tricolons and anaphora are used as persuasion techniques for closing arguments and appellate briefs.
Short punchy fragments help in persuasive briefs where judges are skimming. "The statute is unambiguous."
As with the em dash - let's not throw the baby out with the bath water.
This one hit home... the first time I ever saw Claude do it I really liked it. It's amazing how quickly it became the #1 most aggravating thing it does just through sheer overuse. And of course now it's rampant in writing everywhere.
If AI finally gets rid of the thing that drove me nuts for years, "leverage" as a verb meaning roughly "to use", when no human intervention seemed to work, then I shall be over-the-moon happy. I once worked at a place where this particular word was lever—er, used all the damn time, and I'd never encountered anything so NPC-ish. I felt like I was in The Twilight Zone. I could have told you way back then that you sounded like a bot doing that; now people might actually believe me, and thank god.
I will stick by the em dashes, however. And I might just start using arrows too. Compose, -, > produces →. Not even difficult.
'you must be mad'. Aggressively hilarious. Love it!
Kind of like enforcing linting or pre-commit checks but for prose.
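The linting analogy can be made literal with a few regexes. A minimal sketch of a prose linter; the trope patterns below are illustrative guesses, not the actual file's contents:

```python
import re

# Illustrative trope patterns; a real list would be derived from the tropes file.
TROPE_PATTERNS = {
    "delve": r"\bdelve\b",
    "title-colon-subtitle": r"^[^:\n]{3,60}: [A-Z]",
    "negative-parallelism": r"\bnot (just|only|about) \w+[.,;] (it's|but) ",
}

def lint_prose(text: str) -> list[tuple[int, str]]:
    """Return (line_number, trope_name) pairs, like a linter's warnings."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in TROPE_PATTERNS.items():
            if re.search(pattern, line, flags=re.IGNORECASE):
                findings.append((lineno, name))
    return findings

sample = "Let's delve into the details.\nAI Writing: A Field Guide"
for lineno, name in lint_prose(sample):
    print(f"line {lineno}: matched trope '{name}'")
```

Hooked into a pre-commit check, this would fail the commit the way a code linter does, which is exactly the enforcement model described above.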
One I've seen Gemini using a lot is the "I'll shoot straight with you" preamble (or similar phrasing), when it's about to tell me it can't answer the question.
“We’ve all been there.”
“Your first instinct might be…”
“Now you have a…”
It does not seem like there are lots of people perversely inclined to write a story using all these tropes and words, but surely there must be some, because if you made something that beat the LLM (by being creatively good) while using all the crap the LLM uses, it would be a sort of John Henry triumph (discounting John Henry's final end, of course, which is a real downer).
>> "How would you organize these LLM quirks, ontologically speaking? I have this notion that the better path is to identify what kinds of things are emerging and prompt to do those things better; accept it as something LLMs are going to do and treat it as something to improve on instead of something to eliminate."
The output is a bit better with blind prompting and then applying the results. Here's the gist:
1. Compression artifacts — the model encoding structure implicitly
2. Attention-economy mimicry — the model trained on engagement-optimized writing
3. False epistemic confidence — the model performing knowledge it doesn't have
4. Affective prosthetics — the model simulating emotional register it can't inhabit
5. Mechanical coherence substitutes — the model managing the problem of continuity
Spot corrections are too spotty. Going higher levels with these kinds of problems seems to work better.
More generally, it's interesting that many different LLMs have differences in their favorite tropes but converge on broadly similar patterns. Of course ChatGPT and its default persona (you can choose others in the settings, but most people don't do that) is overrepresented in these examples. For example, the article doesn't mention the casual/based tone of Grok that often feels somewhat forced.
Show HN: Tropes.fyi – Name and shame AI writing - https://news.ycombinator.com/item?id=47088813 - Feb 2026 (3 comments)
I hope ossa-ma sees this second round!
It's a bold strategy, Cotton. Bold of you to say that. Wild how mundane things get called wild. They're making calling things wild their entire personality. In that case, by your logic, (least generous misrepresentation of your logic).
- “The Pledge”:…
- “The Turn”:…
- “The Prestige”:…
(For this particular example I used real terms from the stage magic world, at least according to Christopher Nolan’s film, as it captures the same meaningless-to-the-uninitiated quality.)
I mean, "tapestry" is a great word for something that is interconnected. Why not use it?
All those tropes have their place in certain contexts. AIs overuse them because they have no memory across everything they've written.
Each conversation is a new chat, so it's like "I haven't used 'delve' in a while, think I'll roll out that bad boy"
And then you try to fix this by telling it what not to do which doesn't work very well, so...
No thanks, I hate this large scale social experiment
I understand the sentiment. Meaning I think I understand some of the underlying frustration. But I don't care for the tone or the framing or the depth of analysis (for there isn't much there; I've seen the "if you didn't write it, why should I read it" cliché before *, and it ain't the only argument in town). Now for my detailed responses:
1. In the same way the author wants people to respect other people, I want the author to respect the complexity of the universe. I'm not seeing that.
2. If someone says "I wrote this without any LLM assistance" but used one anyway, THAT is clearly deceptive.
3. If you read a page that was created with LLM assistance, it isn't reasonable to call the creator deceptive just because you assumed no LLM was involved. It takes two to achieve deception: the sender and the receiver.
4. If you read a page on the internet, it is increasingly likely there was no human in the loop for the article at all. Good luck tracing the provenance of who made the call to make it happen. It might well be downstream of someone's job. (Yes, we can talk about diffusion of responsibility, etc., that's fair game -- but if you want to get into the realm of moral judgments, this isn't going to be a quick and tidy conversation)
5. I think the above comment puts too much of an "oh, the halcyon days!" spin on this. Throughout history, most humans, much of the time, have largely been repackaging things they heard before. Unfortunately (or just "in reality") more of us are catching on to just how memetically driven people are. We are both individuals and cogs. It is an uncomfortable truth. That brainwashed uncle you have is almost certainly a less reliable source of information than Claude.
6. The web has crappy incentives. It sucks. Yes, I want people to behave better. That would be nice, but I can't realistically expect people to behave better on the web unless there are incentives and consequences that align with what I want. The Web is a dumpster fire, not because of bad individuals, but because of system dynamics. Incentives. Feedback.
7. If people communicate more clearly, with fewer errors, that's at least a narrow win. One has to at least factor this in.
8. People accusing other people of being LLMs has a cost. Especially when people do it overconfidently or in a crude or mean manner. I've been on the receiving end. Why? Because I write in a way that sometimes triggers people because it resembles how LLMs write.
* I want to read high quality things. I actually care less if you wrote it as bullet points, with the help of an LLM, on a napkin, on a posterboard ... my goal is to learn from something suited to some purpose. I'm happy reading a computer-generated chart. I don't need a human to do that by hand.
The previous paragraph attempts to gesture at some of the conceptual holes in the common arguments behind "if you want a human to read it, a human should write it": they are neither systematically nor rigorously "wargamed" or "thought-experimented"; they are mostly just "knee-jerked".
I am quite interested in many things, including: (1) connecting with real people; (2) connecting with real people that don't merely regurgitate an information source they just ingested; (3) having an intelligent process generating the things I read. As an example of the third, I want "intelligent" organizations that synthesize contributions from their constituent parts. I want "intelligent" algorithms to help me focus on what matters to me. &c.
If a machine does that well, I'm not intrinsically bothered. If a human collaborates with an LLM to do that, fine. Whatever. We have bigger problems! Much bigger ones.
Yes, I want to live in a world where humans are valued for what they write and their intrinsic qualities, even as machines encroach on what used to be our biggest differentiator: intelligence itself. But wanting this and morally shaming people for not doing it doesn't seem like a good way to actually make it happen. Getting to that world, to my eye, requires public sense-making, grappling with the reality of how the world works, forming coalitions, organizing society, and passing laws.
Yes, I understand that HN has a policy that people write their own stuff, and I do. (See #8 above as well as my about page.)
Thank you to the approximately zero or maybe one person who made it this far. I owe you a beer. You can easily find me. I'm serious. But then we have to find a way to have a discussion while enjoying a beer on a video call. Alas.
I expect better from people -- and unfortunately a lot of people's output is lower quality than what I get from Claude. THIS is what pisses me off: that a machine-curated output is actually more useful to me than the vast majority of what people say, at least when I have particular questions to ask. This is one of many uncomfortable realities that I would like people to not flinch away from. As far as intelligent output is concerned, humans are losing a lot of ground. And fast. Don't shoot the messenger. If you don't recognize this, you might have a rather myopic view of intelligence that somehow assumes it must be biological, or you just keep moving the goalposts. Or that somehow (but how?) humans "have it" but machines can't.