So I get the frustration that "ai;dr" captures. On the other hand, I've also seen human writing incorrectly labeled AI. I wrote (using AI!) https://seeitwritten.com as a bit of an experiment on that front. It basically is a little keylogger that records your composition of the comment, so someone can replay it and see that it was written by a human (or a very sophisticated agent!). I've found it to be a little unsettling, though, having your rewrites and false starts available for all to see, so I'm not sure if I like it.
I've noticed that attitude a lot. Everyone thinks their use of AI is perfectly justified while everyone else is generating slop. In gamedev it's especially prominent: artists think generating code is perfectly fine but have an acute stress response when someone suggests generating art assets.
So when someone wants to know something about the topic my website covers, chances are they won't see my material directly, but rather a summary of what the LLM learned from my website.
Ergo, if I want to get my message across, I have to write for the LLM. It's the only reader that really matters, and it is going to have its stylistic preferences (I suspect bland, corporate, factual, authoritative, and controversy-averse; this will be the new SEO).
We meatbags are not the audience.
I no longer feel joy in reading things, as most writing seems the same and pale to me, as if everyone is putting their thoughts down in the same way.
Having your own way of writing always felt personal; it was how you expressed your feelings most of the time.
The saddest part for me is that I can no longer tell someone's true feelings (which were always hard to express in writing, since articulation is hard).
We see it being used by our favourite sportsperson in their retirement post, by someone who has lost a loved one, or by someone who just got their first job, and it's just sad that we can never have those old pre-AI days back again.
Mind you, this person is an excellent writer: they had great success ghostwriting and running a small news website where they wrote and curated articles. But for some reason, the opportunity to have Claude write the things they could never find the time for is too great for them to ignore.
I don't care if you used AI for 99.99% of the research behind your content, but when I read your content, it should be written by you. It's why I never take any article on LinkedIn seriously; even before AI, they all lacked any personal touch.
Doesn't ai;dr kind of contradict AI-generated documentation? If I want to know what Claude thinks about your code, I can just ask it. IMO documentation is the thing least amenable to AI. As the article itself says, I want to read some intention and see how you shape whatever you're documenting.
(AI adding tests seems like a good use, not sure what's meant by scaffolding)
> Why should I bother to read something someone else couldn't be bothered to write?
Interesting mix of sentiments. Is this code you're generating primarily as part of a solo operation? If not, how do coworkers/code reviewers feel about it?
This is the root cause of the problem: labeling all things as just "content". The word "content" entering the lexicon marked a shift in how people think. People are not looking for information, or art, just content. If all you want is content, then AI is acceptable. If you want art, it falls short.
AI bloats text, and everything else it produces, into convoluted, redundant clichés. This is true for prose and code alike. If it reads that way, it's not worth my time, whether an AI wrote it or not. If you wrote it 100% by hand and it still sounds like AI, it's still bad writing and still not worth my time.
I can take the other person's prompt and run it through an LLM myself and proceed from there.
I don't have any solutions though. Sometimes I don't call out an article - like the Hashline post today - because it genuinely contains some interesting content. There is no doubt in my mind that I would have greatly preferred the post if it were just whatever the author prompted the LLM with rather than the LLM output; that would have communicated their thoughts to me better. But it also would have died on /new and I never would have seen it.
For me too, and for writing it has the upside that it's sooo relaxing to just type away and not worry much about small errors anymore.
Shouldn’t we bother to write these things?
I am the first person to respect craft in many domains, and will continue to do so.
I respect it when an actor does their own stunts or when directors choose not to use CGI.
But I will still watch the Matrix and think "holy shit that was cool".
It's all about the quality of the output.
I don't understand how they can think it's a good idea; I instantly classify them as lazy and inauthentic. I'd rather get texts full of mistakes coming straight out of their head than this slop.
If someone wants me to read a giant text generated from a small, poor prompt, I don't wanna read it.
If someone wants to fix that by putting in more effort, writing a better prompt, and expressing the ideas better, I'd rather read that prompt than the LLM output.
In that sense, they are essentially systems that mimic online content.
Therefore, what an AI generates often reflects the perspectives of the people who originally created the training data, rather than the true thoughts of the person prompting it.
Personally I find it super helpful to discuss stuff back and forth: It takes a view, explores the code and brings some insight. I take a view and steer the analysis. And together we arrive at a conclusion.
By that point the AI’s got so much context it typically does a great job summarising the thought process for wider discussion so I can tweak and polish and share.
I haven't even really tried to use LLMs to write anything from a work context because of the ideas you talk about here.
These blanket binary takes are tiresome. There is nuance and rough edges.
I think using AI for writing feedback is fine, but if you're going to have it write for you, don't call it your writing.
Because writing is a dirty, scratched window with liquid between the panes, and an LLM can be the microfiber cloth and degreaser that makes it just a bit clearer.
Outsourcing thinking is bad. Using an LLM to assist in communicating thought is or at least can be good.
The real problem I think the author has here is that it can be difficult to tell the difference, and therefore difficult to judge whether it is worth your time. However, I think author/publisher reputation is a far better signal than looking for AI tells.
How can we tell that this wasn't written by an LLM?
ai;dr is what I'm going to start saying; it's just frustrating to see.
But of course, like producing code with AI, it's very easy to produce cheap slop with it if you don't put in the time. And, unlike code, the recipient of your work will be reading it word by word and line by line, so you can't just write tests and make sure "it works" - it has to pass the meaningfulness test.
I write it first, do a quick self-edit, then have an LLM edit. Then I edit again. It's most definitely my voice, and I love it.
https://www.thenewatlantis.com/publications/one-to-zero
Semantic information, you see, obeys a contrary calculus to that of physical bits. As it increases in determinacy, so its syntactical form increases in indeterminacy; the more exact and intentionally informed semantic information is, the more aperiodic and syntactically random its physical transmission becomes, and the more it eludes compression. I mean, the text of Anna Karenina is, from a purely quantitative vantage of its alphabetic sequences, utterly random; no algorithm could possibly be generated — at least, none that’s conceivable — that could reproduce it. And yet, at the semantic level, the richness and determinacy of the content of the book increases with each aperiodic arrangement of letters and words into coherent meaning.
Edit/add-on: In other words, it is impossible for an LLM (or monkeys at keyboards [0]) to recreate Tolstoy, because of the unique role our minds play in writing. The verb "writing" hardly seems to apply to an LLM when we consider the function it is actually performing.
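To make the compression point concrete, here's a rough sketch (my own illustration, not from the quoted article or the commenter) comparing how a periodic string and an aperiodic string compress. The exact numbers depend on the compressor; the point is only that aperiodic sequences resist the kind of algorithmic shortening that periodic ones allow.

```python
# Rough illustration (assumption: zlib as a stand-in for "an algorithm that
# could reproduce the text"); real prose falls between these two extremes.
import random
import string
import zlib

n = 100_000
periodic = ("ab" * (n // 2)).encode()                                      # trivially describable
aperiodic = "".join(random.choices(string.ascii_lowercase, k=n)).encode()  # no short generating rule

for name, data in [("periodic", periodic), ("aperiodic", aperiodic)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compresses to {ratio:.1%} of its original size")
```

Ordinary English text lands in between: its letter statistics compress well, but nothing much shorter than the text itself pins down its exact, meaning-bearing sequence.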
You don't; you feed it to an LLM and ask it to read it for you.
Conclusion:
Dismissing arguments solely because they are AI-generated constitutes a class of genetic fallacy, which should be called 'Argumentum ad machina'.
Premises:
1. The validity of a logical argument is determined by the truth of its premises and the soundness of its inferences, not by the identity of the entity presenting it.
2. Dismissing an argument based on its source rather than its content constitutes a genetic fallacy.
3. The phrase 'that's AI-generated' functions as a dismissal based on source rather than content.
Assumptions:
1. AI-generated arguments can have true premises and sound inferences.
2. The genetic fallacy is a legitimate logical error to avoid.
3. Source-based dismissals are categorically inappropriate in logical evaluation.
4. AI should be treated as equivalent to any other source when evaluating arguments.
This! This is my feeling exactly. I wrote about encountering work slop last year: https://lambdaland.org/posts/2025-08-04_artifical_inanity/
Chicken.
Seriously, the degree to which supposed engineering professionals have jumped on a tool that lets them outsource their work and their thinking to a bot astounds me. Have they no shame?
I know it’s just modern writing style to preempt all responses. But can’t you just plainly state your business without professing your appreciation?
People who waste others' time with bullshit are aholes. I don't care if it's My Great Friend And Partner in Crime, Anthropic's LLM, or a tedious template written in PHP with just enough substitutions and variations to waste five sentences on it before closing it.
Actually, saying that it’s the same thing is a bit like saying “guns don’t shoot people”. At least you had to copy-paste that PHP template from somewhere and adapt it to your spam. Back in the day.
> I can't imagine writing code by myself again
After that, you say that you need to know the intention for "content".
I think it's pretty inconsistent. You have a strict rule in one direction for code and a strict rule in the opposite direction for "content".
I don't think that writing code unassisted should be taken for granted. Addy Osmani covered that in this talk: https://www.youtube.com/watch?v=FoXHScf1mjA I also don't think all "content" is the sort of content where you need to know the intention. I'll grant that some of it is, for sure.
Edit: I do like intentional writing. However, when AI is generating something high quality, it often seems like it has developed an intention for what it's building, whether one that was conceived and communicated clearly by the person working with the AI or one that emerged unexpectedly through the interaction. And this applies not just to prose but to code.
https://noonker.github.io/posts/2024-07-25-i-respect-our-sha...
> ..and call me an AI luddite
Oh please do call me an AI luddite. It's an honor for me.
If you care about your voice, don't let LLMs write your words. But that doesn't mean you can't use AI to think, critique and draft lots of words for you. It depends on what purpose you're writing for. If you're writing an impersonal document, like a design document, briefing, etc., then who cares. In some cases you already have to write them in a voice that is not your own. Go ahead and write those with AI. But if you're trying to say something more personal, then the words should be your own. AI will always try to 'smooth' out your voice, and if you care about it, you gotta write it yourself.
Now, how do you use AI effectively and still retain your voice? Here's one technique that works well:
1. Start with a voice memo: record yourself, maybe during a walk, and talk about the subject you want, free form. Skip around, jump between sentences, just get it all out of your brain.
2. Open up a chat, add the recording or transcript, clearly state your intent in one sentence, and ask the AI to consider your thoughts and your intent and to ask clarifying questions: what does the AI not understand about how your thoughts support the clearly stated intent of what you want to say?
3. That'll produce a first draft, which will be bad. Tell the AI all the things that don't make sense to you and that you don't like, comment on the whole doc, and get a second draft.
4. Ask the AI if it has more questions for you. You can use live chat to make this conversation go smoother as well; when the AI is asking you questions, you can talk freely by voice.
5. Repeat this one or two more times, and a much finer draft will take shape that is closer to what you want to say.
Throughout this drafting stage, the AI will always try to smooth or average out your ideas, so it is important to keep pointing out all the ways in which it is wrong.
This process front-loads more of the thinking involved. Once you've read and critiqued several drafts, all your ideas will be much clearer and sort of 'cached' and ready to use in your head. Then, sit down and write your own words from scratch; they will come much more easily after all your thoughts have been exercised during the drafting process.
But if the post was generated through a long process of back-and-forth with the model, where significant modifications/additions were made by a human? I don't think there's anything wrong with that.
This is an easy but not very insightful framing.
I want to read intelligent, thoughtful text that is useful in some way: to me, to society, to humanity. Ceteris paribus, the source of the information does not necessarily matter; it matters only by association. To put it another way, “human” vs “machine” is not the core driving factor for me.
All other things equal, I would rather read A over B:
A. high quality AI content, even if it is “only” the result of 6 minutes of human question framing and light editing [1]
B. low quality purely human content, even if it was the result of 60 minutes of effort.
There is increasingly less ability to distinguish “human” writing from “AI” writing. Some people fool themselves on their AI-detection prowess.
To be direct: I want meaningful and satisfying lives for humans. If we want to reward humans for writing more, we better reflect on why, and if we still really want that, we better find ways that work. I don’t think “buy local” as a PR campaign will be easily transferred to a “read human” movement.
[1]: Of course AI training data is drawn from humans, so I do not discount the human factor. My point is that quantifying the effort put into it is not simple.
Also, you could long use "logit_bias" in the APIs of models that supported it to ban the em dash, ban the word "not", ban semicolons, and ban the "fancy quotes" that were clearly added by "those who need to watch" to make sure they can figure out whether you used an LLM.
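For what it's worth, here's a minimal sketch of how that might look against the OpenAI Chat Completions API (the model name, prompt, and the particular banned strings are my own illustrative assumptions, not something from this thread):

```python
# Sketch: use logit_bias to suppress specific tokens (em dash, semicolon,
# curly quotes) during generation. Assumes OPENAI_API_KEY is set.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-4o")

banned_strings = ["\u2014", ";", "\u201c", "\u201d"]  # em dash, semicolon, curly quotes
logit_bias = {}
for s in banned_strings:
    for token_id in enc.encode(s):
        logit_bias[str(token_id)] = -100  # -100 effectively bans the token

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Rewrite this paragraph plainly."}],
    logit_bias=logit_bias,
)
print(response.choices[0].message.content)
```

Note that logit_bias acts on individual token IDs, so banning a whole word like "not" also means covering its capitalized and leading-space variants, each of which has its own ID.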
I think it's the size of the audience that the AI-generated content is for that makes the difference. AI code is generally for a small team (often one person), and AI prose for one person (an email) or a team (an internal doc) is often fine, as it's hopefully intentional and tailored. But what's even the point of AI content (prose or code) for a wide audience? If you can just give me the prompt and I can generate it myself, there's no value there.
This take is baffling to me when I see it repeated. It's like asking why people should use Windows if Bill Gates did not write every line of it himself; we won't be able to see into Bill's mind. Why should you read a book if the author couldn't be bothered to write it properly and had an editor come in to fix things?
The main purpose of a creative work is not seeing intimately into the creator's mind. And the idea that it is only people who don't care who use LLMs is wrong.