Well, this is very interesting, because I'm a native English speaker that studied writing in university, and the deeper I got into the world of literature, the further I was pushed towards simpler language and shorter sentences. It's all Hemingway now, and if I spot an adverb or, lord forbid, a "proceeded to," I feel the pain in my bones.
The way ChatGPT writes drives me insane. As for the author, clearly they're very good, but I prefer a much simpler style. I feel like the big boy SAT words should pop out of the page unaccompanied, just one per page at most.
[0] https://www.theverge.com/features/23764584/ai-artificial-int...
Update: To illustrate this, here's a comparison of a paragraph from this article:
> It is a new frontier of the same old struggle: The struggle to be seen, to be understood, to be granted the same presumption of humanity that is afforded so easily to others. My writing is not a product of a machine. It is a product of my history. It is the echo of a colonial legacy, the result of a rigorous education, and a testament to the effort required to master the official language of my own country.
And ChatGPT's "improvement":
> This is a new frontier of an old struggle: the struggle to be seen, to be understood, to be granted the easy presumption of humanity that others receive without question. My writing is not the product of a machine. It is the product of history—my history. It carries the echo of a colonial legacy, bears the imprint of a rigorous education, and stands as evidence of the labor required to master the official language of my own country.
Yes, there's an additional em-dash, but what stands out to me more is the grandiosity. Though I have to admit, it's closer than I would have thought before trying it out; maybe the author does have a point.
His responses in Zoom calls were just as mechanical and sounded AI-generated. I even checked one of his WhatsApp responses by asking Meta AI whether it was AI-written, and Meta AI agreed that it was, listing reasons why it believed so.
When I showed the response to the colleague, he swore he was not using any AI to write his responses. I believed him after he told me it wasn't AI-written. And now, reading this, I can imagine that it's not an isolated experience.
The formal part resonates, because most non-native English speakers learned English at school, which teaches you literary English rather than day-to-day English. And this holds for most foreign languages learned in this context: you write prose, essays, three-part compositions with an introduction and a conclusion. I got the same kind of education in France, though years of working in IT gave me a more "American" English style: straight to the point and short, with a simpler vocabulary for everyday use.
As for whether your writing is ChatGPT: it's definitely not. What those "AI bounty hunters" would miss in such an essay: there is no fluff. Yes, the sentences may use the classical "three points" method, but they don't stick out like a sore thumb - I would not have noticed had the author not mentioned it. This does not feel like filler. Usually with AI articles, I find myself skipping more than half of each paragraph due to the low information density - just give me the prompt. This article got me reading every single word. Can we call this vibe reading?
I just saw someone today that multiple people accused of using ChatGPT, but their post was one solid block of text and had multiple grammar errors. But they used something similar to the way ChatGPT speaks, so they got accused of it and the accusers got massive upvotes.
I'm sure there's some voice actor out there who can't get work because they sound too similar to the generated voices that appear in TikTok videos.
Earlier today I stumbled upon a blog post that started with a sentence that was obviously written by someone with a Slavic background (writers from most other language families produce certain grammatical patterns when writing in another language; German, e.g., is also quite distinctive). My first thought was "great, this is most likely not written by an LLM".
Some people are perhaps overly focussed on superficial things like em-dashes. The real tells for ChatGPT writing are more subtle -- a tendency towards hyperbole (it's not A, it's [florid restatement of essentially A] B!), a certain kind of rhythm, and frequently a hard-to-describe "emptiness" of claims.
(LLMs can write in many styles, but this is the sort of "kid padding out the essay word count" style you get in ChatGPT etc. by default.)
- Do not confuse 'night' with 'evening'.
- This office spells it 'programme'.
- Hotels are 'kept', not 'run'.
- Dead men do not leave 'wives', but they may leave 'widows'.
- 'Very' is a word often used without discrimination. It is not difficult to express the same meaning when it is eliminated.
- The relative pronoun 'that' is used about three times superfluously to the one time that it helps the sense.
- Do not write 'this city' when you mean Chicago.
Some things I've learned/realized from this thread:
1. You can make an em-dash on Macs using -- or a keyboard shortcut
2. On Windows you can do something like Alt + 0151 which shows why I have never done it on purpose... (my first ever —)
3. Other people might have em-dashes on their keyboard?
I still think it's a relatively good marker for ChatGPT-generated-text iff you are looking at text that probably doesn't apply to the above situations (give me more if you think of them), but I will keep in mind in the future that it's not a guarantee and that people do not have the exact same computer setup as me. Always good to remember that. I still do the double space after the end of a sentence after all.
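Since the thread keeps conflating the look-alike dash characters, here's a small illustrative snippet (not from any commenter, just Python's standard library) that shows the three codepoints people are actually arguing about:

```python
import unicodedata

# The three look-alike characters in the "em-dash = AI" debate:
# the hyphen on every keyboard, the en dash, and the em dash.
for ch in ["-", "\u2013", "\u2014"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+002D  HYPHEN-MINUS
# U+2013  EN DASH
# U+2014  EM DASH
```

Only the first of these sits on a standard keyboard, which is why typed text so often uses a hyphen where typography would call for an em dash.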
We will all soon write and talk like ChatGPT. Kids grow up asking ChatGPT for homework help; people use it for therapy, for resumes and CVs, for their imaginary romantic "friends"; and for every everyday question they put to a search engine, they'll get some LLM response back. After some time you'll find yourself chatting with a relative or a coworker over coffee, and instead of hearing "lol, Jim, that's bullshit" you'll hear something like "you're absolutely right, here, let me show you a bulleted list of why this is the case...". Even scarier, you'll soon hear yourself say that to someone as well.
For sure he describes an education in English that seems misguided and showy. And I get the context - if you don't show off in your English, you'll never aspire to the status of an Englishman. But doggedly sticking to anyone's "rules of good writing" never results in good writing. And I don't think that's what the author is doing, if only because he is writing about the limitations of what he was taught!
So idk maybe he does write like ChatGPT in other contexts? But not on this evidence.
I have seen people use "you're using AI" as a lazy dismissal of someone else's writing, for whatever reasons. That usually tells you more about the person saying it than the writing though.
Beyond these surface level tells though, anyone who's read a lot of both AI-unassisted human writing as well as AI output should be able to pick up on the large amount of subtler cues that are present partly because they're harder to describe (so it's harder to RLHF LLMs in the human direction).
But even today, when it's not too hard to sniff out AI writing, it's quite scary to me how bad many (most?) people's chatbot-detection senses are, as this article indicates. Thinking that human writing is LLM output is a false positive, which is bad but not catastrophic; the opposite seems much worse. The long-term social impact of being "post-truth" seems poised to be what people have been raving / warning about for years w.r.t. other tech like the internet.
Today feels like the WW1 of information warfare: society has been caught with its pants down by the speed of innovation.
Because while people OBVIOUSLY use dashes in writing, humans usually fall back on the (technically incorrect) hyphen, aka the "minus symbol" - because that's what's available on keyboards and basically no one will care.
Seems like, in the biggest game of telephone called the internet, this has devolved into "using any form of dash = AI".
Great.
According to the Russian-language Wikipedia (https://ru.wikipedia.org/wiki/%D0%94%D0%BE%D0%BA%D0%B0%D0%B7...), the original tale goes back to the famous Persian poet Rumi in the 12th century, which just tickles me pink about how awesome language is.
I also love and use em-dashes regularly. ChatGPT writes like me.
Just recently I was amazed at how good the text produced by Gemini 3 Pro in Thinking mode is. It feels like a big improvement, again.
But we also have to be honest and accept that nowadays, using a certain kind of vocabulary or paragraph structure will make people think the text was written by AI.
LLMs - like all tools - reduce redundant & repetitive work. In the case of LLMs it’s now easy to generate cookie cutter prose. Which raises the bar for truly saying something original. To say something original now, you must also put in the work to say it in an original way. In particular by cutting words and rephrasing even more aggressively, which saves your reader time and can take their thinking in new directions.
Change is a constant, and good changes tend to gain mass adoption. Our ancestors survived because they adapted.
‘Striding’ is ‘purposeful’; ‘trudging’ expresses ‘weariness’; ‘ambling’ implies ‘nonchalance’.
Good verb choice reduces adverb dependence.
The exact same problem exists with writing. In fact, this problem seems to exist across all fields: science, for example, is filled with people who have never done a groundbreaking study, presented a new idea, or solved an unsolved problem. These people and their jobs are so common that the education system orients itself to teach to them rather than anyone else. In the same way, an education in literature focuses on the traits you're more likely to need to get a job: hitting deadlines, following the expected story structure, etc.
Having confined ourselves to a tiny little box, can we really be surprised that we’re so easy to imitate?
Kenya writes the way the British taught before they left, and the British themselves didn't necessarily speak or write the way they taught.
The models mostly say "mat".
Besides, of course what people write will sound like LLMs, since LLMs are trained on what we've been writing on the internet. For those of us who've been lucky, written a lot, and are better represented in the dataset, LLM output will be closer to how we already wrote - but then of course we get blamed for sounding like LLMs, because apparently people don't understand that LLMs were trained on text written by humans...
And guess what: when you revise something to be more structured and you do it in one sitting, your writing style naturally gravitates towards the stuff LLMs tend to churn out, even if with fewer bullet points and em dashes (which, incidentally, iOS/macOS adds for me automatically even though I am a double-dash person).
I don't really understand the aversion some people have to the use of LLMs to generate or refine written communication. It seems to trigger the "that's cheating!" outrage impulse.
Unfortunately I think posts like this only seem to detract from valid criticisms. There is an actual ongoing epidemic of AI-generated content on the internet, and it is perfectly valid for people to be upset about this. I don't use the internet to be fed an endless stream of zero-effort slop that will make me feel good. I want real content produced by real people; yet posts like OP only serve to muddy the waters when it comes to these critiques. They latch onto opinions of random internet bottom-feeders (a dash now indicates ChatGPT? Seriously?), and try to minimise the broader skepticism against AI content.
I wonder whether people like the author will regret their stance once a sufficient number of people are indoctrinated and their content becomes irrelevant. Why would they read anything you have to say if the magic writing machine can keep shitting out content tailored for them 24/7?
e.g. > [...] and there is - in my observational opinion - a rather dark and insidious slant to it
Let's leave it at "insidious" and "in my opinion". Or drop "in my opinion" entirely, since it goes without saying.
Just take one dip and end it.
All the toil of word-smithing to receive such an ugly reward, convincing new readers that you are lazy. What a world we live in.
Seeing a project that basically wraps 100 lines of code in a novel-length README, à la "[emoticon] How does it compare to... [emoticon]" blah blah, really puts me off.
Perplexity gauges how predictable a text is. If I start a sentence, "The cat sat on the...", your brain, and the AI, will predict the word "floor."
No. No no no. The next word is "mat"!
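To make the "predictability" idea behind perplexity concrete, here's a toy sketch using a unigram model with add-one (Laplace) smoothing. This is purely illustrative: real detectors score token probabilities under a full LLM, and the corpus and function name here are made up for the example.

```python
import math
from collections import Counter

def unigram_perplexity(train_text: str, test_text: str) -> float:
    """Perplexity of test_text under a smoothed unigram model.

    Lower perplexity = the model finds the text more predictable.
    """
    train = train_text.lower().split()
    test = test_text.lower().split()
    counts = Counter(train)
    total = len(train)
    vocab_size = len(set(train) | set(test))
    log2_prob = 0.0
    for word in test:
        # Add-one smoothing so unseen words get a small nonzero probability.
        p = (counts[word] + 1) / (total + vocab_size)
        log2_prob += math.log2(p)
    return 2 ** (-log2_prob / len(test))

corpus = "the cat sat on the mat the dog sat on the rug"
print(unigram_perplexity(corpus, "the cat sat on the mat"))        # low: every word seen before
print(unigram_perplexity(corpus, "the axolotl pondered entropy"))  # high: mostly unseen words
```

The familiar sentence scores a much lower perplexity than the surprising one, which is all an "AI detector" of this flavor is measuring, scaled up.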
They have to actually read material, and not just use the structure as a proxy for ability.
AI companies and some of their product users relentlessly exploit the communication systems we've painstakingly built up since 1993. We (both readers and writers) shouldn't be required to individually adapt to this exploitation. We should simply stop it.
And yes, I believe that the notion this exploitation is unstoppable and inevitable is just crude propaganda. This isn't all that different from the emergence of email spam. One way or the other this will eventually be resolved. What I don't know is whether this will be resolved in a way that actually benefits our society as a whole.
That's just sad. I really feel for this author.
> You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake.
This is :
> humanity is now defined by the presence of casual errors, American-centric colloquialisms, and a certain informal, conversational rhythm
And once you start noticing the 'threes', it's fun also.
I think the only solution to this is, people should simply not question AI usage. Pretence is everywhere. Face makeup, dress, the way you speak, your forced smile...
Both aim at using an English that is safe, controlled and policed for fear of negative evaluation.
I don't know the author of this article, so I don't know whether I should feel good or bad about this. LLMs produce better writing than most people can, so when someone writes this eloquently, most people will assume it was produced by an LLM. The ride in the closed horse carriage was so comfortable it felt like being in a car, and so people assumed it was a car. Is that good? Is that bad?
Also note that LLMs are now much more than just "one ML model to predict the next character" - LLMs are now large systems with many iterations, many calls to other systems, databases, etc.
Let's say you happen to be lucky, don't accuse someone unfairly, and they are using ChatGPT to write what they said. Who cares?! What is it you're doing by "calling them out" ? Winning internet points? Feeling superior? Fixing the world?
https://www.pangram.com/history/282d7e59-ab4b-417c-9862-10d6...
The author's writing style is really similar to AI's. AI has already, somehow, passed the Turing test. AI detectors are not that trustworthy (but still useful).
> TECHNICAL DIFFICULTIES PLEASE STAND BY
This actually made me pee myself out loud!
Wanna submit a proof in a criminal case? Better be ready to debunk whether this was made with AI.
AI is going to fuck everything up for absolutely no reason other than profit and greed and I can't fucking wait
If you read an English public-school essay by a pupil who has not done the assigned reading, the effect is very similar: a lot of complex sentences peppered with non-Celtic words, but utterly without meaning. In simple terms, the writer does not know what the hell they are talking about, although they know how to superficially string words together into a structured and coherent text. Even professional writers do this when they have a deadline and not a single original idea of what to write about.
But we do not write just to fart language on paper or screen, we write to convey a meaning, a message. To communicate. One can of course find meaning from tea leaves and whatnot, but truly it is a communal experience to write with an intention and to desperately try to pass one’s ideas and emotions forward to one’s common enby.
This is what is lacking in the millions of GPT-generated LinkedIn posts, because in the end they are just structure without content, empty shells. Sometimes, of course, one can get something genuinely good by accident, but it is fairly rare. Usually it is just a flexing of syntax, in a way both tepid and without heart. And it is unlikely that LLMs can overcome this hurdle, since people writing without intent cannot either. They are just statistical models guessing words, after all.
OK but come ON, that has to have been deliberate!
In addition to the things chatbots have made clichés, the author actually has some "tells" which identify him as human more strongly. Content is one thing. But he also has things (such as small explanations and asides in parentheses, like this) which I don't think I've EVER seen an instruction-tuned chatbot do. I know I do it myself, but I'm aware it's a stylistic wart.
In that regard, I have an anecdote, not from me but from a student of mine.
One of the hats I wear is that of a seminary professor. I had a student who is now a young pastor, a very bright dude who is well read and is an articulate writer.
"It is a truth universally acknowledged" (with apologies to Jane Austen) that theological polemics can sometimes be ugly. Well, I don't have time for that, but my student had the impetus (and naiveté) of youth, and he stepped into several of them during these years. He made Facebook posts that were authentic essays, well argued, with balanced prose that got better as the years passed, treating opponents graciously while firmly standing his ground. He did so while he was a seminary student, and also after graduation. He would argue a point very well.
Fast forward to 2025. The guy still has time for some Internet theological flamewars. In the latest one, he made (as usual) a well-argued, long-form Facebook post defending his viewpoint on some theological issue against people who hold opposite beliefs on that particular question. One of those opponents, a particularly nasty fellow, retorted with something like "you are cheating, you're just pasting some ChatGPT answer!", and pasted a screenshot of some AI-detection tool that rated my student's writing something like "70% AI Positive". Some other people pointed out that the opponent's own writing also seemed like AI, and he admitted that he used AI to "enrich" some of it.
And this is infuriating. If that particular opponent had bothered to check my student's profile, he would have seen that same kind of "AI writing" going back to at least 2018, when ChatGPT and its like were just a speck in Sam Altman's eye. That's just the way my student writes, and he writes that way because he actually reads books; he's a bonafide theology nerd. Any resemblance of his writing to LLM output is coincidence.
In my particular case, this resonated with me because, as I said, I also tend to write in a way that resembles LLM output, with certain ways of structuring paragraphs, liberal use of ordered and unordered lists, etc. Again, this is infuriating. First, because people tend to assume one is unable to write at a certain level without cheating with AI; and second, because now everybody and their cousin can mimic something that took many of us years to master, and believe they no longer need to do the hard work of learning to express themselves in an even remotely articulate way. Oh well, welcome to this brave new world...
Just the other week a client reached out and asked a bunch of questions that resulted in me writing 15+ SQL queries (not small/basic ones) from scratch and then doing some more math/calculations on top of that to get the client the numbers they were looking for. After spending an hour or two on it and writing up my response, they said something to the effect of "Thanks for that! I hope AI made it easy to get that all together!".
I'm sure they were mostly being nice and trying (badly) to say "I hope it wasn't too much trouble", but it took me a few iterations to put together a reply that wasn't confrontational. No, I didn't use AI, mostly because LLMs absolutely suck at that kind of thing. Oh, they might spit out convincing SQL statements, and those SQL statements might even work and return data, but the chance they got the right numbers is very low in my experience (yes, I've tried).
The nuance in a database schema, especially one that's been around for a while and seen its share of additions/migrations/etc, is something LLMs do not handle well. Sure, if you want a count of users an LLM can probably do that, but anything more complicated that I've tried falls over very quickly.
The whole ordeal frustrated me quite a bit because it trivialized and minimized what was real work that I did (non-billed work, trying to be nice). I wouldn't do this because I'm a professional but there was a moment when I thought "Next time I'll just reply with AI Slop instead and let them sort it out". It really took the wind out of my sails and made me regret the effort I put into getting them the data they asked for.
X isn't just Y, it's a <description> Z!
Basically, for two reasons:
1) A giant portion of all internet text was written by those same folks.
2) Those folks are exactly the people anyone would hire to RLHF the models into a safe, commercially desirable output style.
I am pretty convinced the models could be more fluent, spontaneous, and original, but that could jeopardize their adoption in the corporate world, so I think the labs intentionally fine-tuned this style to death.
It feels very natural to me. But if everyone and their mother considers it a "giveaway", I'd be a fool not to take that into account. *sigh*
I'm not Kenyan, but I was raised in a Canadian family of academics, where mastering thoughtful – but slightly archaic – writing was expected of me. I grew up surrounded by books that would now be training material, and whose prose would likely now be flagged as ChatGPT.
Just another reason to hate all this shit.
I regularly find myself avoiding the use of the em-dash now even though it is exactly what I should be writing there, for fear of people thinking I used ChatGPT.
I wish it wasn't this way. Alas.
Thankfully, no one I report to internally wants me to simplify my English to prevent LLM accusations. The work I do requires deliberate use of language.
The other day I saw and argued with this accusation by a HN commenter against a professional writer, based on the most tenuous shred of evidence: https://news.ycombinator.com/item?id=46255049
chatgpt revolutionized my work because it makes creating those bland texts so much easier and fast. it made my job more interesting because i don't have to care about writing as much as before.
to those who complain about ai slop, i have nothing to say. english was slop before, even before ai, and not because of some conspiracy, but because the gatekeepers of journals and scientific production already wanted to be fed slop.
for sure society will create others, totally idiosyncratic ways to generate distinction and an us vs others. that's natural. but, for now, let's enjoy this interregnum...
What LLMs also do though, is use em-dashes like this (imagine that "--" is an em-dash here): "So, when you read my work--when you see our work--what are you really seeing?"
You see? LLMs often use em-dashes without spaces before and after, as a period replacement. That is probably something only an Oxford professor would write; I've never seen a human write text like that. So those specific em-dashes are a sure sign of generated slop.