But for general use, I think this is misguided. The problem with LLM output is not that it uses em dashes or words such as "crucial". It's that most LLM articles on LinkedIn or on personal blogs just take a one-sentence prompt and dress it up into a lot of pointless words, wasting everyone's time: "I had a shower thought and I asked a chatbot to write five pages of text about it." I don't need prettier words, I need there to be far fewer of them.
On the flip side, if you're a human and actually have something of consequence to say, "delve" all you want.
And looking at its suggestions, they are not very good. People are better off developing their own writing style than trusting generic advice meant for lowest-common-denominator writing.
(For stories with multiple protagonists, the common choices that seem to work best for readers are 3 or 5. Humans are weird.)
I suspect LLMs use that rule so much because it's so common in their training data, for good reason.
Update: 13 patterns in 800 words for Samuel Clemens. Apparently he's an em-dash abuser, but also likes "filler adverbs", "triple constructions" and "anaphora abuse". Damn!
And for Mr. Hemingway we have 43 patterns in 1600 words. 16 filler adverbs, 5 triple constructions, 5 staccato bursts, and 14 question then answer. My my...
Inputting Japanese sentences of any length flags the whole sentence as "Dramatic Fragment: A standalone paragraph with ≤4 words".
Otherwise, almost nothing. I don't know whether that's because the tool is specialized for English, or because writing it as a second language makes my English genuinely unnatural.
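One plausible mechanism for the Japanese false positive (my guess; I haven't seen the tool's code): if word counts come from whitespace splitting, a Japanese sentence, which has no spaces, collapses into a single "word" and trips the ≤4-word fragment rule. A minimal sketch:

```python
def word_count(text: str) -> int:
    # Naive whitespace tokenization, as an English-centric checker might do.
    return len(text.split())

english = "The cat sat on the mat."
japanese = "猫がマットの上に座っていた。"  # roughly the same sentence, no spaces

word_count(english)   # 6
word_count(japanese)  # 1 -> flagged as a "dramatic fragment" (<=4 words)
```

Proper handling would need a language-aware segmenter rather than `str.split`.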
I just write my text without too much thought about it and I get a rewritten version that is usually clearer, but not pedantic or overly verbose.
It particularly helps with English text, as English is not my first language.
We are moving to a point where we don't care whether the PR was written by AI. We care that the author understands what it's about and that they tested it; in general, we want ownership.
It's the same with articles. I don't care if it was written by AI if the content is interesting, and if AI makes it easier to digest, that's a win-win.
The problem is not the presentation. It's the content.
Paste AI generated text and get a more human sounding version? That’s just AI generated text with extra steps.
Seems like a sad situation, but I'm not going to start changing my communication style to avoid sounding like an LLM. At least not yet.
Ultimately, slop is so pervasive that I'm wasting a fair amount of time vetting text, and it's affecting my ability to simply enjoy reading. I keep getting partway into an article before realizing it's low-quality AI writing. A quick heads-up that something looks like AI before I start would save me a lot of energy, even on articles I decide to read anyway, because it cuts down on mental overhead.
Also, it was painful to learn that my very first blog post I wrote in 2013 is AI generated. But I'm fine with it because I read this:
> A short punchy opener (≤10 words) followed by two or more substantially longer elaboration sentences — the LLM "hook then evidence pile" rhythm.
... and realized that the entire app is AI generated.
Every single article out there is now structured as:

- THE problem
- THE solution
- THE proof
- Why it matters
Thought this was a NY Post-style headline about FBI "top cop" Kash Patel's drinking problem: https://www.theatlantic.com/politics/2026/04/kash-patel-fbi-...
Until now, ideas were only relevant when their owner was able to communicate them, regardless of the idea's actual merit.
LLMs "democratize" (VC term) the sharing of ideas, since people with weak communication skills can now be heard.
It should loop the LLM's results back on itself repeatedly, behind the scenes, until its writing is free of signs of slop. After your quality gates pass and the result is presented, it'd be cool to then see a visualization of each of the agent's drafts that the user can page through, to watch how the writing was gradually improved by the model!
No need to keep a human in the writing-improvement loop. Just show it when it’s slop free.
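The loop described above could be sketched like this; `rewrite` and `score` are hypothetical stand-ins for an LLM call and the pattern checker, and the gate threshold is an assumption, not anything this tool actually implements:

```python
def deslop(text, rewrite, score, max_rounds=5, threshold=0):
    """Iteratively rewrite `text` until its slop score passes a gate,
    keeping every draft so the user can page through them afterwards."""
    drafts = [text]
    for _ in range(max_rounds):
        if score(drafts[-1]) <= threshold:
            break  # quality gate passed; stop looping
        drafts.append(rewrite(drafts[-1]))
    return drafts
```

With a real LLM behind `rewrite`, you'd also want a stop condition for drafts that stop improving, or the loop just burns tokens.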
I'm building writetrack.dev - a writing signal sdk that helps folks understand proof of process. It takes a different approach to writing analysis and I'm pretty sure the logo will never feature a brown turd.
> Overused Intensifier - Delete it. If the sentence still makes sense, the word was never needed. If it doesn't, rewrite the sentence to show why it matters.
You heard it here first. Adjectives? More like AIdjectives, a covert plan by AI companies to make our writing sloppier. According to this recommendation, writing should never have any emphasis; it should only contain the most basic "X is Y" relations, like in some programming language. Sentences should contain the bare minimum of information required to parse them, and everything else must be cut. In practice, this recommendation only filters out a few of the most pervasive 'corporate PowerPoint'-style words, but even then, the suggestion that these words are never useful is wrong.
> Triple Construction - Break the pattern. Use two items or four. Or convert one item into its own sentence to give it more weight.
Humans may really like when things are structured into threes, but you must resist this AI temptation! Use two or four points, because you're not like them. The only reason cited for why this is wrong is that LLMs use this pattern often, so naturally the rest of us must cede good writing practices to them.
> "Almost" Hedge - Commit. "Almost always" → "usually." Or just say "always" and defend the claim. Readers notice when you won't take a stance.
As we all know, the world is discrete and easy to describe. That's why there simply isn't anything between things that happen "usually" (70%) and "always" (100%). Saying "almost always" (95%) is bad, because you should round your estimates and defend what is now an obviously wrong statement, for it makes you seem more brutal and confident.
> "Broader Implications" - State the implication explicitly, or cut the phrase. "This has broader implications" says nothing. What are the implications? Say them.
God forbid you organize an essay that's in any way non-linear, temporarily withholding some information for the sake of organization. Asking to can the phrase entirely says that even complex writing should be strung together in a rigid and sequential order.
That's the problem with the project, as I see it. It was too heavily inspired by Grammarly and the like, and in chasing that model, the criticisms were bent to fit it. The issue with the LLM 'style' is the punchy, continuous overuse of these patterns, to the point where the phrases start to feel like meaningless sound combinations. There's nothing wrong with most of these patterns individually; what I hate is text filled with them to the brim, not text where they appear at all.

If your writing is like the example paragraph, with most of the text highlighted, that's a sign your essay is more rhetoric than substance. But if you write an argument with three items in it and it gets highlighted because "that's like AI", prompting you to delete it, that's performative self-censorship, not improving your writing.
And good to know that Teddy Roosevelt was not an LLM: https://www.trcp.org/2011/01/18/it-is-not-the-critic-who-cou...
Slop is stopped by allowing unique quirks to flourish. Do you speak in 'staccato bursts'? THEN FUCKING WRITE IN STACCATO BURSTS! Do you need a 'throat-clearing opener'? THEN FUCKING USE ONE!
Human language does not need to take progressive steps toward some universal standard. Having one is fine, in theory, but the beauty lies in how we solve for our inability to consistently utilize it. Adding mechanism to every step removes the beauty. Stop being the problem.
I'm so over this idiocy. It's gotten to the point that the "haha, gotcha!" AI claims are more annoying than AI slop itself. God forbid you use a semicolon or an em dash or an interesting sentence structure to break things up, because someone will be quick to point out the "proof" that it's machine generated.
Always gotta have In This AI Era of Ours. Because even if you fail to convince the reader of the point you ostensibly were trying to make you still get to tediously skull-bang about The AI Era. And it only costs tokens.
> Staccato Burst - Three or more consecutive very short sentences at matching cadence.
This is real. It’s not your imagination. AI is here and eating your lunch/AI is psychologically draining/The unemployment lines are unusually long.
I had a suspicion that a friend was using AI to respond to my texts, and this tool said he was!!!
It caught a "Short-Hook Paragraph" and "Negation Pivot" and "Staccato Burst" in one text.
Wonderful tool!
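The staccato-burst rule quoted above is simple enough to sketch. The thresholds here (sentences of ≤6 words, runs of ≥3) are my guesses at what such a checker might use, not the tool's actual values:

```python
import re

def staccato_bursts(text, max_words=6, min_run=3):
    """Return runs of >= min_run consecutive short sentences."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]
    bursts, run = [], []
    for sentence in sentences:
        if len(sentence.split()) <= max_words:
            run.append(sentence)  # short sentence: extend the current run
        else:
            if len(run) >= min_run:
                bursts.append(run)
            run = []  # long sentence breaks the run
    if len(run) >= min_run:
        bursts.append(run)
    return bursts
```

On "This is real. It's not your imagination. AI is here." this flags one burst; a single long rambling sentence flags nothing.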
I read this before but I have some doubts. I recall some companies that were surprised when suddenly the prices were increased. Usual examples include Amazon, Google and some more, but this can happen to any company, including AI slop master companies. I am not at all claiming that the AI slop has zero use cases, of course - there are use cases, so I don't deny that. But the assumption generated here by AI slop, claiming how all the problems will soon have been solved, and risk-free profits are to be made by all companies, is just rubbish nonsense. AI slop is a big liar. In fact: I am beginning to believe that the current US administration is an AI slop brigade. Every time the stock market yields some suspicious profits, it seems to be that the AI slop protects some thieves here.
Now I have a name for the thing I despise the most about AI writing.
This doesn't detect AI slop. It's just a Grammarly/Copilot clone.
Yes, I see the message about it staying local. No, I don't trust the message or that you will never be hacked.