Probably my best and most insightful stuff has been produced more or less effortlessly, since I spent enough time/effort _beforehand_ getting to know the domain and issue I was interested in from different angles.
When I try writing fluff or being impressive without putting in the work first, I usually bump up against all the stuff I don't have a clear picture of yet, and it becomes a never-ending slog. YMMV.
The problem is that generated text isn't easy to detect, and I'm sure the people who work on it will work hard to make detection even harder.
I have difficulty detecting even fake videos; how can I possibly detect generated text in plain text accurately? I will make plenty of false positives, accusing people of using generated text when they wrote it themselves. That will cause unnecessary friction, and I don't know how to prevent it.
It can write about a spark, but the content has no spark.
The #1 point really: have access to data / experiences / expert knowledge that's unique & can't be distilled from public sources and/or scraped from the internet. This has always been the case. It just holds more weight when AI agents are everywhere.
If you're worried about producing "content", the completion bots have caught up with you.
See the other posts calling the article "a Linkedin post". Those were slop even before LLMs.
Now if you have some information you want to share, that's another topic...
I see LLMs more and more as a mirror: if YOU can orchestrate high-level knowledge, have a brutally clear vision of what you want, and prompt accordingly, things will go well for you. (I suppose this all comes back to 'context engineering', just with higher specificity about what you are actually prompting.) It turns out domain knowledge, time- and experience-built wisdom, and experience in niches, whatever they may be, are and always will be valuable!
LLMs are naive and have a very mainstream view on things; this often leads them down suboptimal paths. If you can see through some of the mainstream BS on a number of topics, you can help LLMs avoid mistakes. It helps if you can think from first principles.
I love using LLMs but I wouldn't trust one to write code unsupervised for some of my prized projects. They work incredibly well with supervision though.
I don't know anything about marketing, though the first paragraph of the blog post makes it clear it's written in a marketing context.
But as a user, or literally just a bystander, using AI isn't really good. I mean, for LinkedIn posts I guess. Isn't the whole point to stand out by not using AI on LinkedIn?
Like, I can imagine a post ending with:
Written with love & passion by a fellow human. Peace.
And it would be better / different than this.
Listen man, I am from a third-world country too, and I had real issues with my grammar. Unironically, this was the first advice I got from people on HN; I suddenly became conscious of it and tried to improve.
Now I get called AI slop for writing the way I write. So it's painful to see that, in this context, my improvement just gets thrown out the window whenever someone calls a comment I write here or anywhere else AI slop.
I guess I have used AI, and I have pasted my messages into it and found that it can write like me, but I really don't use that (iirc I only used it once, for testing purposes on one Discord user, never on HN). My point is, I will write things myself, and if people call it AI slop, I can actually back up the claim that it was written by a human: just ask me anything about it.
I don't really think people who use AI themselves can say much in response when someone critiques their work as AI slop.
I was talking to a friend once and we started debating philosophy. He sent me his Medium article; I was impressed, but I noticed the em dashes (--), so I asked him if it was written by AI. He said the ideas were his, but he wrote/condensed it with AI (once again, third-world country, and honestly the same response as the original person).
And he was my friend, but I still left thinking: hmm, if you can't take the time with your project to write something yourself, that really lessens my capacity to read it. I even told him I would be more interested in reading his prompts, and just started discussing the philosophy itself with him.
And honestly, the same point goes for AI-generated code projects. I have vibe-coded many myself, but a lot of the time I can't read them, or can't find the will to, if they're too verbose or not to my liking. Usually in that context they're just prototypes for personal use, but I still end up open-sourcing them in case someone is interested, I guess, since it costs me nothing to open source.