Ironically, LLMs might end up forcing us back toward more distinct voices because sameness has become the default background.
I've had a lot of luck using GPT5 to interrogate my own writing. A prompt I use (there are certainly better ones): "I'm an editor considering a submitted piece for a publication {describe audience here}. Is this piece worth the effort I'll need to put in, and how far will I need to cut it back?". Then I'll go paragraph by paragraph asking whether it has a clear topic, flows, and then I'll say "I'm not sure this graf earns its keep" or something like that.
GPT5 and Claude will always respond to these kinds of prompts with suggested alternative language. I'm convinced the trick to this is never to use those words, even if they sound like an improvement over my own. At the first point where that happens, I dial my LLM-wariness up to 11 and take a break. Usually the answer is to restructure paragraphs, not to apply the spot improvement (even in my own words) the LLM is suggesting.
LLMs are quite good at (1) noticing multi-paragraph arcs that go nowhere (2) spotting repetitive word choices (3) keeping things active voice and keeping subject/action clear (4) catching non-sequiturs (a constant problem for me; I have a really bad habit of assuming the reader is already in my head or has been chatting with me on a Slack channel for months).
Another thing I've come to trust LLMs with: writing two versions of a graf and having it select the one that fits the piece better. Both grafs are me. I get that LLMs will have a bias towards some language patterns and I stay alert to that, but there's still not that much opportunity for an LLM to throw me into "LLM-voice".
As soon as I know something is written by AI I tune out. I don't care how good it is - I'm not interested if a person didn't go through the process of writing it.
It's not just LLMs, it's how the algorithms promote engagement: rage bait, videos with obvious inaccuracies, etc. Who gets rewarded? The content creators and the platform. Engaging with it just seems to accentuate the problem.
There need to be algorithms that promote cohorts' and individuals' preferences.
Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
What personally disturbs me the most is the self censorship that was initially brought forward by TikTok and quickly spread to other platforms - all in the name of being as advertiser friendly as possible.
LinkedIn was the first platform where I really observed people losing their unique voice in favor of corporate-friendly, "please hire me" speak. Now this seems to be basically any platform. The only platform that seems to be somewhat protected from it is Reddit, where many mods seem to dislike LLMs as much as everybody else. But more likely, it's just less noticeable.
The way LLMs standardize communication mirrors earlier waves of standardization: expanding empires (cultural), book printing (language), the industrial revolution (power looms, factories, assembly procedures, etc.).
In that process, interesting but less "scale-able" culture, dialects, languages, craftsmanship, and ideas (or simply those not used by the people in power) were often lost, replaced by easier-to-produce but often lower-quality products - through the power of "affordable economics", not active conflict.
We already have the concise, buzzword-heavy English business register trained into ChatGPT for formal messaging (or, for informal messaging, the casual, overexcited American one), which I'm afraid might take hold of global communication the same way as LLM usage advances.
what if we flip LLMs into voice trainers? Like, use them to brainstorm raw ideas and rewrite everything by hand to sharpen that personal blade. atrophy risk still huge?
Nudge to post more of my own mess this week...
Don't look at social media. Blogging is kinda re-surging. I just found out Dave Barry has a substack. https://davebarry.substack.com/ That made me happy :) (Side note, did he play "Squirrel with a Gun??!!!")
The death of voice is greatly exaggerated. Most LLM voice is cringe. But it's ok to use an LLM, have taste, and get a better version of your voice out. It's totally doable.
For myself, I have been writing all my life. I tend to write longform posts from time to time[0], and enjoy it.
That said, I have found LLMs (ChatGPT works best for me) to be excellent editors. They can help correct minor mistakes, as long as I ignore a lot of their advice.
The few who have something important to say will say it, and we will listen regardless of the medium.
Worse is better.
A unique, even significantly superior, voice will find it hard to compete against the sheer volume of terrible, non-unique, LLM-generated voices.
Worse is better.
Others respond in the same style. As a result, it ends up with long, multi-paragraph messages full of em dashes.
Basically, they are using AI as a proxy to communicate with each other, trying to sound more intelligent to the rest of the group.
I don't disagree, but LLMs happened to help standardize some interesting concepts that were previously more spread out (drift, scaffolding, and so on). It helps that ChatGPT has access to such a wide audience, allowing that level of language penetration. I am not saying don't have a voice. I am saying: take what works.
There are skilled writers. Very skilled, unique writers. And I'm both exceedingly impressed by them as well as keenly aware that they are a rare breed.
But there are so many people with interesting ideas locked in their heads who aren't skilled writers. I have a deep suspicion that many great ideas have gone unshared because the thinker couldn't quite figure out how to express them.
In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.
Of course, I love a good, unique voice. It's a pleasure to parse patio11's straussian technocratic musings. Or pg's as-simple-as-possible form.
And I hope we don't lose those. But somehow I suspect we may see more of them as creative thinkers find new ways to express themselves. I hope!
I've been talking to some friends and they feel the same. Depending on where you are participating in a discussion, you just might not feel it is worth it, because the other party might just be a bot.
I agree; I think we should try to do both.
In Germany, for example, we have very few typically German brands. Our brands became very global. If you go to Japan, for example, you will find the same products, like ramen or cookies or cakes, everywhere, but all of them are slightly different, from different small producers.
If you go to an autobahn motorway/highway rest area in Japan, you will find local products. If you do the same in Germany, you find just the generic American shit: Mars, Mondelez, PepsiCo, Unilever...
Even our German coke, Fritz cola, is a niche / hipster thing even today.
I have always had a very idiosyncratic way of expressing myself, one that many people do not understand. Just as having a smartphone has changed my relationship to appointments - turning me into a prompt and reliable "cyborg" - LLMs have made it possible for me to communicate with a broader cross section of people.
I write what I have to say, I ask LLMs for editing and suggestions for improvement, and then I send that. So here is the challenge for you: did I follow that process this time?
I promise to tell the truth.
Improve grammar and typos in my draft but don't change my writing style.
Your mileage may vary.

There's a lot of talk over whether LLMs make discourse 'better' or 'worse', with very little attention given to the crisis we were already having with online discourse before they came around. Edelman was astroturfing long before GPT. Fox 'news' and the spectrum of BS between them and the NYT (arranged by how sophisticated they considered their respective pools of rubes to be) have always, always been propaganda machines and PR firms at heart, wearing the skin of journalism like Buffalo Bill.
We have needed to learn to think critically for a very long time.
Consider this: if you are capable of reading between the lines, and of dealing with what you read or hear on the merits of the thoughts contained therein, then how are you vulnerable to slop? If it was written by an AI (or a reporter, or some rando on the internet) but contains ideas that you can turn over and understand critically for yourself, is it still slop? If it's dumb and it works, it's not dumb.
I'm not even remotely suggesting that AI will usher in a flood of good ideas. No, it's going to be used to pump propaganda and disseminate bullshit at massive scale (and perhaps occasionally help develop good ideas).
We need to inoculate ourselves against bullshit, as a society and a culture. Be a skeptic. Iron-man arguments against your beliefs. Be ready to bench-test ideas when you hear them and make it difficult for nonsense to flourish. It is (and has been) high time to get loud about critical thinking.
In any case, as someone who has experimented with AI for creative writing: LLMs _do not destroy_ your voice. They do flatten it, but with minimal effort you can make the result sound the way you find best reflects your thought.
Here's why:
And now when I see these emoji fests I instantly lose interest and trust in the content of the email. I have to spend time sifting through the fluff to find what’s actually important.
LLMs are creating an asymmetric imbalance between the effort to write and the effort to read. What takes my coworkers probably a couple of minutes to draft requires me 2-3x as long to decipher. That imbalance used to run the other way.
I've raised the issue before at work, and one response I got was to "use AI to summarize the email." Are we really spending all this money and energy on the world's worst compression algorithm?
Social media already lost that nearly two decades ago - it died as content marketing rose to life.
Don't blame on LLMs what we've long since lost to the cancer that is advertising[0].
And don't confuse GenAI as a technology with what the cancer of advertising coopts it to. The root of the problem isn't in the generative models, it's in what they're used for - and the problem uses aren't anything new. We've been drowning in slop for decades, it's just that GenAI is now cheaper than cheap labor in content farms.
--
[0] - https://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html
- "Hey, Jimmy, the cookie jar is empty. Did you eat the cookies?"
- "You're absolutely right, father — the jar seems to be empty. Here is bullet point list why consuming the cookies was the right thing to do..."
2) People who use LLMs for understanding
I think I'll stick to 2) for many reasons.
there's enough potential and wiggle room but people align, even when they don't, just to align.
when Rome was flourishing, only a few saw what was lingering in the cracks.
Of course there are also horrible uses of AI: liars, scummy cheaters, and fake videos on YouTube, owned by a greedy mega-corporation that sold its soul to AI. So the bad use cases may outnumber the good ones, but there are good use cases, and "losing our voice to LLMs" isn't the whole picture, sorry.
Skill becomes expensive mechanized commodity
old code is left to rot while people try to survive
we lose our history, we lose our dignity.
If you really have no metrics to hit (not even the internal craving for likes), then it doesn't make much sense to outsource writing to LLMs.
But yes, it's sad to see that your original stuff is lost in the sea of slop.
Sadly, as long as there will be money in publishing, this will keep happening.
Even before LLMs, if you wanted to be a big content creator on YouTube, Instagram, TikTok..., you'd better fall in line and produce content with the target aesthetic. Otherwise, good luck.
* 28% of U.S. adults are at or below "level 1" literacy, essentially meaning people unable to function in an environment that requires written language skills.
* 54% of U.S. adults read below a sixth-grade level.
These statistics refer to an inability to interpret written material, much less create it. As to the latter, a much smaller percentage of U.S. adults can compose a coherent sentence.
We're moving toward a world where people will default to reliance on LLMs to generate coherent writing, including college students, who according to recent reports are sometimes encouraged to rely on LLMs to complete their assignments.
If we care to, we can distinguish LLM output from that of a typical student: An LLM won't make the embarrassing grammatical and spelling errors that pepper modern students' prose.
Yesterday I saw this headline in a major online media outlet: "LLMs now exceed the intelect [sic] of the average human." You don't say.
We improve our use of words when we work to improve our use of words.
We improve how we understand by how we ask.
The discomfort and annoyance that sentence generates is interesting. Being accused of being a bot is frustrating, while interacting with bots creates a sense of futility.
Back in the day when Facebook first was launched, I remember how I felt about it - the depth of my opposition. I probably have some ancient comments on HN to that effect.
Recently, I’ve developed the same degree of dislike for GenAI and LLMs.
And that too is an expression of their own agency. #Laissez-faire
We've proved we can sort of value it, through supporting sustainability/environmental practices, or at least _pretending to_.
I just wonder, what will be the "Carbon credits" of the AI era. In my mind a dystopian scheme of AI-driven companies buying "Human credits" from companies that pay humans to do things.
I suppose when your existence is in the cloud, the fall back to earth can look scary. But it's really only a few inches down. You'll be ok.
Predictably, this has turned into a horror zone of AI written slop that all sounds the same, with section titles with “clever” checkbox icons, and giant paragraphs that I will never read.
I'd love to see an actual study of people who think they're proficient at detecting this stuff. I suspect that they're far less capable of spotting these things than they convince themselves they are.
Everything is AI. LLMs. Bots. NPCs. Over the past few months I've seen demonstrably real videos posted to sites like Reddit, and the top post is someone declaring that it is obviously AI, they can't believe how stupid everyone is to fall for it, etc. It's like people default assume the worst lest they be caught out as suckers.